00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 90 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3268 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.012 The recommended git tool is: git 00:00:00.012 using credential 00000000-0000-0000-0000-000000000002 00:00:00.014 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.027 Fetching changes from the remote Git repository 00:00:00.029 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.048 Using shallow fetch with depth 1 00:00:00.048 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.048 > git --version # timeout=10 00:00:00.061 > git --version # 'git version 2.39.2' 00:00:00.061 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.076 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.076 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.790 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.801 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.813 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.813 > git config core.sparsecheckout # timeout=10 00:00:06.823 > git read-tree -mu HEAD # timeout=10 00:00:06.837 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.856 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.857 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.960 [Pipeline] Start of Pipeline 00:00:06.976 [Pipeline] library 00:00:06.977 Loading library shm_lib@master 00:00:06.977 Library shm_lib@master is cached. Copying from home. 00:00:06.995 [Pipeline] node 00:00:07.009 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:07.011 [Pipeline] { 00:00:07.022 [Pipeline] catchError 00:00:07.024 [Pipeline] { 00:00:07.034 [Pipeline] wrap 00:00:07.043 [Pipeline] { 00:00:07.050 [Pipeline] stage 00:00:07.052 [Pipeline] { (Prologue) 00:00:07.286 [Pipeline] sh 00:00:07.565 + logger -p user.info -t JENKINS-CI 00:00:07.585 [Pipeline] echo 00:00:07.587 Node: WFP20 00:00:07.595 [Pipeline] sh 00:00:07.892 [Pipeline] setCustomBuildProperty 00:00:07.901 [Pipeline] echo 00:00:07.902 Cleanup processes 00:00:07.906 [Pipeline] sh 00:00:08.183 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.183 3875940 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.196 [Pipeline] sh 00:00:08.478 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.478 ++ grep -v 'sudo pgrep' 00:00:08.478 ++ awk '{print $1}' 00:00:08.478 + sudo kill -9 00:00:08.478 + true 00:00:08.493 [Pipeline] cleanWs 00:00:08.503 [WS-CLEANUP] Deleting project workspace... 00:00:08.503 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.510 [WS-CLEANUP] done 00:00:08.514 [Pipeline] setCustomBuildProperty 00:00:08.530 [Pipeline] sh 00:00:08.812 + sudo git config --global --replace-all safe.directory '*' 00:00:08.888 [Pipeline] httpRequest 00:00:08.921 [Pipeline] echo 00:00:08.923 Sorcerer 10.211.164.101 is alive 00:00:08.931 [Pipeline] httpRequest 00:00:08.935 HttpMethod: GET 00:00:08.935 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.936 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.951 Response Code: HTTP/1.1 200 OK 00:00:08.951 Success: Status code 200 is in the accepted range: 200,404 00:00:08.952 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.951 [Pipeline] sh 00:00:12.232 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:12.242 [Pipeline] httpRequest 00:00:12.264 [Pipeline] echo 00:00:12.265 Sorcerer 10.211.164.101 is alive 00:00:12.270 [Pipeline] httpRequest 00:00:12.274 HttpMethod: GET 00:00:12.274 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:12.275 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:12.294 Response Code: HTTP/1.1 200 OK 00:00:12.294 Success: Status code 200 is in the accepted range: 200,404 00:00:12.294 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:24.735 [Pipeline] sh 00:01:25.081 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:27.627 [Pipeline] sh 00:01:27.910 + git -C spdk log --oneline -n5 00:01:27.910 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:27.910 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:27.910 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:27.910 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:27.910 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:27.930 [Pipeline] withCredentials 00:01:27.941 > git --version # timeout=10 00:01:27.956 > git --version # 'git version 2.39.2' 00:01:27.972 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:27.975 [Pipeline] { 00:01:27.985 [Pipeline] retry 00:01:27.987 [Pipeline] { 00:01:28.006 [Pipeline] sh 00:01:28.288 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:28.300 [Pipeline] } 00:01:28.323 [Pipeline] // retry 00:01:28.328 [Pipeline] } 00:01:28.350 [Pipeline] // withCredentials 00:01:28.360 [Pipeline] httpRequest 00:01:28.389 [Pipeline] echo 00:01:28.391 Sorcerer 10.211.164.101 is alive 00:01:28.400 [Pipeline] httpRequest 00:01:28.405 HttpMethod: GET 00:01:28.405 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.406 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.419 Response Code: HTTP/1.1 200 OK 00:01:28.420 Success: Status code 200 is in the accepted range: 200,404 00:01:28.420 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.269 [Pipeline] sh 00:01:38.551 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.939 [Pipeline] sh 00:01:40.219 + git -C dpdk log --oneline -n5 00:01:40.219 caf0f5d395 version: 22.11.4 00:01:40.219 
7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.219 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.219 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.219 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.230 [Pipeline] } 00:01:40.247 [Pipeline] // stage 00:01:40.257 [Pipeline] stage 00:01:40.259 [Pipeline] { (Prepare) 00:01:40.283 [Pipeline] writeFile 00:01:40.301 [Pipeline] sh 00:01:40.582 + logger -p user.info -t JENKINS-CI 00:01:40.595 [Pipeline] sh 00:01:40.876 + logger -p user.info -t JENKINS-CI 00:01:40.889 [Pipeline] sh 00:01:41.170 + cat autorun-spdk.conf 00:01:41.170 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.170 SPDK_RUN_UBSAN=1 00:01:41.170 SPDK_TEST_FUZZER=1 00:01:41.170 SPDK_TEST_FUZZER_SHORT=1 00:01:41.170 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.170 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:41.176 RUN_NIGHTLY=1 00:01:41.182 [Pipeline] readFile 00:01:41.212 [Pipeline] withEnv 00:01:41.214 [Pipeline] { 00:01:41.230 [Pipeline] sh 00:01:41.512 + set -ex 00:01:41.512 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:41.512 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:41.512 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.512 ++ SPDK_RUN_UBSAN=1 00:01:41.512 ++ SPDK_TEST_FUZZER=1 00:01:41.512 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:41.512 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.512 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:41.512 ++ RUN_NIGHTLY=1 00:01:41.512 + case $SPDK_TEST_NVMF_NICS in 00:01:41.512 + DRIVERS= 00:01:41.512 + [[ -n '' ]] 00:01:41.512 + exit 0 00:01:41.521 [Pipeline] } 00:01:41.542 [Pipeline] // withEnv 00:01:41.549 [Pipeline] } 00:01:41.566 [Pipeline] // stage 00:01:41.577 [Pipeline] catchError 00:01:41.579 [Pipeline] { 00:01:41.597 [Pipeline] timeout 00:01:41.597 Timeout set to expire in 30 min 00:01:41.598 [Pipeline] { 00:01:41.615 [Pipeline] stage 00:01:41.617 [Pipeline] { (Tests) 00:01:41.635 [Pipeline] sh 00:01:41.918 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:41.918 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:41.918 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:41.918 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:41.918 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:41.918 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:41.918 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:41.918 + [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:41.918 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:41.918 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:41.918 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:41.918 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:41.918 + source /etc/os-release 00:01:41.918 ++ NAME='Fedora Linux' 00:01:41.918 ++ VERSION='38 (Cloud Edition)' 00:01:41.918 ++ ID=fedora 00:01:41.918 ++ VERSION_ID=38 00:01:41.918 ++ VERSION_CODENAME= 00:01:41.918 ++ PLATFORM_ID=platform:f38 00:01:41.918 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:41.918 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:41.918 ++ LOGO=fedora-logo-icon 00:01:41.918 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:41.918 ++ HOME_URL=https://fedoraproject.org/ 00:01:41.918 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:41.918 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:41.918 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:41.918 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:41.918 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:41.918 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:41.918 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:41.918 ++ SUPPORT_END=2024-05-14 00:01:41.918 ++ VARIANT='Cloud Edition' 00:01:41.918 ++ VARIANT_ID=cloud 00:01:41.918 + uname -a 00:01:41.918 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:41.918 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:45.209 Hugepages 00:01:45.209 node hugesize free / total 00:01:45.209 node0 1048576kB 0 / 0 00:01:45.209 node0 2048kB 0 / 0 00:01:45.209 node1 1048576kB 0 / 0 00:01:45.209 node1 2048kB 0 / 0 00:01:45.209 00:01:45.209 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:45.209 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:45.209 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:45.209 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:45.209 + rm -f /tmp/spdk-ld-path 00:01:45.209 + source autorun-spdk.conf 00:01:45.209 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.209 ++ SPDK_RUN_UBSAN=1 00:01:45.209 ++ SPDK_TEST_FUZZER=1 00:01:45.209 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:45.209 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:45.209 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:45.209 ++ RUN_NIGHTLY=1 00:01:45.209 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:45.209 + [[ -n '' ]] 00:01:45.209 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:45.209 + for M in /var/spdk/build-*-manifest.txt 00:01:45.209 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:45.209 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:45.209 + for M in /var/spdk/build-*-manifest.txt 00:01:45.209 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:45.209 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:45.209 ++ uname 00:01:45.209 + [[ Linux == \L\i\n\u\x ]] 00:01:45.209 + sudo dmesg -T 00:01:45.209 + sudo dmesg --clear 00:01:45.209 + dmesg_pid=3876875 00:01:45.209 + [[ Fedora Linux == FreeBSD ]] 00:01:45.209 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:45.209 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:45.209 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:45.209 + [[ -x /usr/src/fio-static/fio ]] 00:01:45.209 + sudo dmesg -Tw 00:01:45.209 + export FIO_BIN=/usr/src/fio-static/fio 00:01:45.209 + FIO_BIN=/usr/src/fio-static/fio 00:01:45.209 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:45.209 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:45.209 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:45.209 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:45.209 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:45.209 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:45.209 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:45.209 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:45.209 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:45.209 Test configuration: 00:01:45.209 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.209 SPDK_RUN_UBSAN=1 00:01:45.209 SPDK_TEST_FUZZER=1 00:01:45.209 SPDK_TEST_FUZZER_SHORT=1 00:01:45.209 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:45.209 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:45.210 RUN_NIGHTLY=1 21:02:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:45.210 21:02:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:45.210 21:02:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:45.210 21:02:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:45.210 21:02:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.210 21:02:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.210 21:02:42 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.210 21:02:42 -- paths/export.sh@5 -- $ export PATH 00:01:45.210 21:02:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.210 21:02:42 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:45.210 21:02:42 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:45.210 21:02:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720983762.XXXXXX 00:01:45.210 21:02:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720983762.mdEygQ 00:01:45.210 21:02:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:45.210 21:02:42 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:45.210 21:02:42 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:45.210 21:02:42 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:01:45.210 21:02:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:45.210 21:02:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:45.210 21:02:42 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:45.210 21:02:42 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:45.210 21:02:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.210 21:02:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user' 00:01:45.210 21:02:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:45.210 21:02:42 -- pm/common@17 -- $ local monitor 00:01:45.210 21:02:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.210 21:02:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.210 21:02:42 -- pm/common@21 -- $ date +%s 00:01:45.210 21:02:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.210 21:02:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.210 21:02:42 -- pm/common@21 -- $ date +%s 00:01:45.210 21:02:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p 
monitor.autobuild.sh.1720983762 00:01:45.210 21:02:42 -- pm/common@25 -- $ sleep 1 00:01:45.210 21:02:42 -- pm/common@21 -- $ date +%s 00:01:45.210 21:02:42 -- pm/common@21 -- $ date +%s 00:01:45.210 21:02:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720983762 00:01:45.210 21:02:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720983762 00:01:45.210 21:02:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720983762 00:01:45.469 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720983762_collect-cpu-load.pm.log 00:01:45.469 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720983762_collect-vmstat.pm.log 00:01:45.469 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720983762_collect-cpu-temp.pm.log 00:01:45.469 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720983762_collect-bmc-pm.bmc.pm.log 00:01:46.407 21:02:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:46.407 21:02:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:46.407 21:02:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:46.407 21:02:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:46.407 21:02:43 -- spdk/autobuild.sh@16 -- $ date -u 00:01:46.408 Sun Jul 14 07:02:43 PM UTC 2024 00:01:46.408 21:02:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:46.408 v24.05-13-g5fa2f5086 00:01:46.408 21:02:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:46.408 21:02:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:46.408 21:02:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:46.408 21:02:43 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:46.408 21:02:43 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:46.408 21:02:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.408 ************************************ 00:01:46.408 START TEST ubsan 00:01:46.408 ************************************ 00:01:46.408 21:02:43 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:46.408 using ubsan 00:01:46.408 00:01:46.408 real 0m0.001s 00:01:46.408 user 0m0.001s 00:01:46.408 sys 0m0.000s 00:01:46.408 21:02:43 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:46.408 21:02:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:46.408 ************************************ 00:01:46.408 END TEST ubsan 00:01:46.408 ************************************ 00:01:46.408 21:02:43 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:46.408 21:02:43 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:46.408 21:02:43 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:46.408 21:02:43 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:46.408 21:02:43 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:46.408 21:02:43 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:46.408 ************************************ 00:01:46.408 START TEST build_native_dpdk 00:01:46.408 ************************************ 00:01:46.408 21:02:43 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk log --oneline -n 5 00:01:46.408 caf0f5d395 version: 22.11.4 00:01:46.408 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:46.408 dc9c799c7d vhost: fix missing spinlock unlock 00:01:46.408 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:46.408 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:46.408 
21:02:43 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:46.408 21:02:43 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:46.408 patching file config/rte_config.h 00:01:46.408 Hunk #1 succeeded at 60 (offset 1 line). 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:46.408 21:02:43 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:46.668 21:02:43 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:46.668 21:02:43 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:46.668 21:02:43 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:50.863 The Meson build system 00:01:50.863 Version: 1.3.1 00:01:50.863 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:50.863 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp 00:01:50.863 Build type: native build 00:01:50.863 Program cat found: YES (/usr/bin/cat) 00:01:50.863 Project name: DPDK 00:01:50.863 Project version: 22.11.4 00:01:50.863 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.863 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:50.863 Host machine cpu family: x86_64 00:01:50.863 Host machine cpu: x86_64 00:01:50.863 Message: ## Building in Developer Mode ## 00:01:50.863 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.863 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:50.863 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.863 Program objdump found: YES (/usr/bin/objdump) 00:01:50.863 Program python3 found: YES (/usr/bin/python3) 00:01:50.864 Program cat found: YES (/usr/bin/cat) 00:01:50.864 config/meson.build:83: WARNING: The "machine" option 
is deprecated. Please use "cpu_instruction_set" instead. 00:01:50.864 Checking for size of "void *" : 8 00:01:50.864 Checking for size of "void *" : 8 (cached) 00:01:50.864 Library m found: YES 00:01:50.864 Library numa found: YES 00:01:50.864 Has header "numaif.h" : YES 00:01:50.864 Library fdt found: NO 00:01:50.864 Library execinfo found: NO 00:01:50.864 Has header "execinfo.h" : YES 00:01:50.864 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.864 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.864 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.864 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.864 Run-time dependency openssl found: YES 3.0.9 00:01:50.864 Run-time dependency libpcap found: YES 1.10.4 00:01:50.864 Has header "pcap.h" with dependency libpcap: YES 00:01:50.864 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.864 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.864 Compiler for C supports arguments -Wformat: YES 00:01:50.864 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.864 Compiler for C supports arguments -Wformat-security: NO 00:01:50.864 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.864 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.864 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.864 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.864 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.864 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.864 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.864 Compiler for C supports arguments -Wundef: YES 00:01:50.864 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.864 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.864 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:50.864 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.864 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.864 Compiler for C supports arguments -mavx512f: YES 00:01:50.864 Checking if "AVX512 checking" compiles: YES 00:01:50.864 Fetching value of define "__SSE4_2__" : 1 00:01:50.864 Fetching value of define "__AES__" : 1 00:01:50.864 Fetching value of define "__AVX__" : 1 00:01:50.864 Fetching value of define "__AVX2__" : 1 00:01:50.864 Fetching value of define "__AVX512BW__" : 1 00:01:50.864 Fetching value of define "__AVX512CD__" : 1 00:01:50.864 Fetching value of define "__AVX512DQ__" : 1 00:01:50.864 Fetching value of define "__AVX512F__" : 1 00:01:50.864 Fetching value of define "__AVX512VL__" : 1 00:01:50.864 Fetching value of define "__PCLMUL__" : 1 00:01:50.864 Fetching value of define "__RDRND__" : 1 00:01:50.864 Fetching value of define "__RDSEED__" : 1 00:01:50.864 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.864 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.864 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.864 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.864 Checking for function "getentropy" : YES 00:01:50.864 Message: lib/eal: Defining dependency "eal" 00:01:50.864 Message: lib/ring: Defining dependency "ring" 00:01:50.864 Message: lib/rcu: Defining dependency "rcu" 00:01:50.864 Message: lib/mempool: Defining dependency "mempool" 00:01:50.864 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.864 Fetching value of 
define "__PCLMUL__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.864 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:50.864 Compiler for C supports arguments -mpclmul: YES 00:01:50.864 Compiler for C supports arguments -maes: YES 00:01:50.864 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.864 Compiler for C supports arguments -mavx512bw: YES 00:01:50.864 Compiler for C supports arguments -mavx512dq: YES 00:01:50.864 Compiler for C supports arguments -mavx512vl: YES 00:01:50.864 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.864 Compiler for C supports arguments -mavx2: YES 00:01:50.864 Compiler for C supports arguments -mavx: YES 00:01:50.864 Message: lib/net: Defining dependency "net" 00:01:50.864 Message: lib/meter: Defining dependency "meter" 00:01:50.864 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.864 Message: lib/pci: Defining dependency "pci" 00:01:50.864 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.864 Message: lib/metrics: Defining dependency "metrics" 00:01:50.864 Message: lib/hash: Defining dependency "hash" 00:01:50.864 Message: lib/timer: Defining dependency "timer" 00:01:50.864 Fetching value of define "__AVX2__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.864 Message: lib/acl: Defining dependency "acl" 00:01:50.864 Message: lib/bbdev: Defining dependency "bbdev" 00:01:50.864 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:50.864 Run-time dependency libelf found: YES 0.190 00:01:50.864 Message: lib/bpf: Defining dependency "bpf" 00:01:50.864 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:50.864 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.864 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.864 Message: lib/distributor: Defining dependency "distributor" 00:01:50.864 Message: lib/efd: Defining dependency "efd" 00:01:50.864 Message: lib/eventdev: Defining dependency "eventdev" 00:01:50.864 Message: lib/gpudev: Defining dependency "gpudev" 00:01:50.864 Message: lib/gro: Defining dependency "gro" 00:01:50.864 Message: lib/gso: Defining dependency "gso" 00:01:50.864 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:50.864 Message: lib/jobstats: Defining dependency "jobstats" 00:01:50.864 Message: lib/latencystats: Defining dependency "latencystats" 00:01:50.864 Message: lib/lpm: Defining dependency "lpm" 00:01:50.864 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:50.864 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:50.864 Message: lib/member: Defining dependency "member" 00:01:50.864 Message: lib/pcapng: Defining dependency "pcapng" 00:01:50.864 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.864 Message: lib/power: Defining dependency "power" 00:01:50.864 Message: lib/rawdev: Defining dependency "rawdev" 00:01:50.864 Message: lib/regexdev: Defining 
dependency "regexdev" 00:01:50.864 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.864 Message: lib/rib: Defining dependency "rib" 00:01:50.864 Message: lib/reorder: Defining dependency "reorder" 00:01:50.864 Message: lib/sched: Defining dependency "sched" 00:01:50.864 Message: lib/security: Defining dependency "security" 00:01:50.864 Message: lib/stack: Defining dependency "stack" 00:01:50.864 Has header "linux/userfaultfd.h" : YES 00:01:50.864 Message: lib/vhost: Defining dependency "vhost" 00:01:50.864 Message: lib/ipsec: Defining dependency "ipsec" 00:01:50.864 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.864 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.864 Message: lib/fib: Defining dependency "fib" 00:01:50.864 Message: lib/port: Defining dependency "port" 00:01:50.864 Message: lib/pdump: Defining dependency "pdump" 00:01:50.864 Message: lib/table: Defining dependency "table" 00:01:50.864 Message: lib/pipeline: Defining dependency "pipeline" 00:01:50.864 Message: lib/graph: Defining dependency "graph" 00:01:50.864 Message: lib/node: Defining dependency "node" 00:01:50.864 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.864 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.864 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.864 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.864 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:50.864 Compiler for C supports arguments -Wno-unused-value: YES 00:01:50.864 Compiler for C supports arguments -Wno-format: YES 00:01:50.864 Compiler for C supports arguments -Wno-format-security: YES 00:01:50.864 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:51.433 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:51.433 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:51.433 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:51.433 Fetching value of define "__AVX2__" : 1 (cached) 00:01:51.433 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.433 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.433 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.433 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:51.433 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:51.433 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:51.433 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.433 Configuring doxy-api.conf using configuration 00:01:51.433 Program sphinx-build found: NO 00:01:51.433 Configuring rte_build_config.h using configuration 00:01:51.433 Message: 00:01:51.433 ================= 00:01:51.433 Applications Enabled 00:01:51.433 ================= 00:01:51.433 00:01:51.433 apps: 00:01:51.433 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:51.433 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:51.433 test-security-perf, 00:01:51.433 00:01:51.433 Message: 00:01:51.433 ================= 00:01:51.433 Libraries Enabled 00:01:51.433 ================= 00:01:51.433 00:01:51.433 libs: 00:01:51.433 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:51.433 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:51.433 bbdev, bitratestats, bpf, cfgfile, compressdev, 
cryptodev, distributor, efd, 00:01:51.433 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:51.433 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:51.433 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:51.433 table, pipeline, graph, node, 00:01:51.433 00:01:51.433 Message: 00:01:51.433 =============== 00:01:51.433 Drivers Enabled 00:01:51.433 =============== 00:01:51.433 00:01:51.433 common: 00:01:51.433 00:01:51.433 bus: 00:01:51.433 pci, vdev, 00:01:51.433 mempool: 00:01:51.433 ring, 00:01:51.433 dma: 00:01:51.433 00:01:51.433 net: 00:01:51.433 i40e, 00:01:51.433 raw: 00:01:51.433 00:01:51.433 crypto: 00:01:51.433 00:01:51.433 compress: 00:01:51.433 00:01:51.433 regex: 00:01:51.433 00:01:51.433 vdpa: 00:01:51.433 00:01:51.433 event: 00:01:51.433 00:01:51.433 baseband: 00:01:51.433 00:01:51.433 gpu: 00:01:51.433 00:01:51.433 00:01:51.433 Message: 00:01:51.433 ================= 00:01:51.433 Content Skipped 00:01:51.433 ================= 00:01:51.433 00:01:51.433 apps: 00:01:51.433 00:01:51.433 libs: 00:01:51.433 kni: explicitly disabled via build config (deprecated lib) 00:01:51.433 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:51.433 00:01:51.433 drivers: 00:01:51.433 common/cpt: not in enabled drivers build config 00:01:51.433 common/dpaax: not in enabled drivers build config 00:01:51.433 common/iavf: not in enabled drivers build config 00:01:51.433 common/idpf: not in enabled drivers build config 00:01:51.433 common/mvep: not in enabled drivers build config 00:01:51.433 common/octeontx: not in enabled drivers build config 00:01:51.433 bus/auxiliary: not in enabled drivers build config 00:01:51.433 bus/dpaa: not in enabled drivers build config 00:01:51.433 bus/fslmc: not in enabled drivers build config 00:01:51.433 bus/ifpga: not in enabled drivers build config 00:01:51.433 bus/vmbus: not in enabled drivers build config 00:01:51.433 common/cnxk: not in enabled drivers build config 00:01:51.433 common/mlx5: not in enabled drivers build config 00:01:51.433 common/qat: not in enabled drivers build config 00:01:51.433 common/sfc_efx: not in enabled drivers build config 00:01:51.433 mempool/bucket: not in enabled drivers build config 00:01:51.433 mempool/cnxk: not in enabled drivers build config 00:01:51.433 mempool/dpaa: not in enabled drivers build config 00:01:51.433 mempool/dpaa2: not in enabled drivers build config 00:01:51.433 mempool/octeontx: not in enabled drivers build config 00:01:51.433 mempool/stack: not in enabled drivers build config 00:01:51.433 dma/cnxk: not in enabled drivers build config 00:01:51.433 dma/dpaa: not in enabled drivers build config 00:01:51.433 dma/dpaa2: not in enabled drivers build config 00:01:51.433 dma/hisilicon: not in enabled drivers build config 00:01:51.433 dma/idxd: not in enabled drivers build config 00:01:51.433 dma/ioat: not in enabled drivers build config 00:01:51.433 dma/skeleton: not in enabled drivers build config 00:01:51.433 net/af_packet: not in enabled drivers build config 00:01:51.433 net/af_xdp: not in enabled drivers build config 00:01:51.433 net/ark: not in enabled drivers build config 00:01:51.433 net/atlantic: not in enabled drivers build config 00:01:51.433 net/avp: not in enabled drivers build config 00:01:51.433 net/axgbe: not in enabled drivers build config 00:01:51.433 net/bnx2x: not in enabled drivers build config 00:01:51.433 net/bnxt: not in enabled drivers build config 00:01:51.433 net/bonding: not in enabled drivers build config 
00:01:51.433 net/cnxk: not in enabled drivers build config 00:01:51.433 net/cxgbe: not in enabled drivers build config 00:01:51.433 net/dpaa: not in enabled drivers build config 00:01:51.433 net/dpaa2: not in enabled drivers build config 00:01:51.433 net/e1000: not in enabled drivers build config 00:01:51.433 net/ena: not in enabled drivers build config 00:01:51.433 net/enetc: not in enabled drivers build config 00:01:51.433 net/enetfec: not in enabled drivers build config 00:01:51.433 net/enic: not in enabled drivers build config 00:01:51.433 net/failsafe: not in enabled drivers build config 00:01:51.433 net/fm10k: not in enabled drivers build config 00:01:51.433 net/gve: not in enabled drivers build config 00:01:51.433 net/hinic: not in enabled drivers build config 00:01:51.433 net/hns3: not in enabled drivers build config 00:01:51.433 net/iavf: not in enabled drivers build config 00:01:51.433 net/ice: not in enabled drivers build config 00:01:51.433 net/idpf: not in enabled drivers build config 00:01:51.433 net/igc: not in enabled drivers build config 00:01:51.433 net/ionic: not in enabled drivers build config 00:01:51.433 net/ipn3ke: not in enabled drivers build config 00:01:51.433 net/ixgbe: not in enabled drivers build config 00:01:51.433 net/kni: not in enabled drivers build config 00:01:51.433 net/liquidio: not in enabled drivers build config 00:01:51.433 net/mana: not in enabled drivers build config 00:01:51.433 net/memif: not in enabled drivers build config 00:01:51.433 net/mlx4: not in enabled drivers build config 00:01:51.433 net/mlx5: not in enabled drivers build config 00:01:51.433 net/mvneta: not in enabled drivers build config 00:01:51.433 net/mvpp2: not in enabled drivers build config 00:01:51.433 net/netvsc: not in enabled drivers build config 00:01:51.433 net/nfb: not in enabled drivers build config 00:01:51.433 net/nfp: not in enabled drivers build config 00:01:51.433 net/ngbe: not in enabled drivers build config 00:01:51.433 net/null: not in enabled drivers build config 00:01:51.433 net/octeontx: not in enabled drivers build config 00:01:51.433 net/octeon_ep: not in enabled drivers build config 00:01:51.433 net/pcap: not in enabled drivers build config 00:01:51.433 net/pfe: not in enabled drivers build config 00:01:51.433 net/qede: not in enabled drivers build config 00:01:51.433 net/ring: not in enabled drivers build config 00:01:51.433 net/sfc: not in enabled drivers build config 00:01:51.433 net/softnic: not in enabled drivers build config 00:01:51.433 net/tap: not in enabled drivers build config 00:01:51.433 net/thunderx: not in enabled drivers build config 00:01:51.433 net/txgbe: not in enabled drivers build config 00:01:51.433 net/vdev_netvsc: not in enabled drivers build config 00:01:51.433 net/vhost: not in enabled drivers build config 00:01:51.433 net/virtio: not in enabled drivers build config 00:01:51.433 net/vmxnet3: not in enabled drivers build config 00:01:51.433 raw/cnxk_bphy: not in enabled drivers build config 00:01:51.433 raw/cnxk_gpio: not in enabled drivers build config 00:01:51.433 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:51.433 raw/ifpga: not in enabled drivers build config 00:01:51.433 raw/ntb: not in enabled drivers build config 00:01:51.433 raw/skeleton: not in enabled drivers build config 00:01:51.433 crypto/armv8: not in enabled drivers build config 00:01:51.433 crypto/bcmfs: not in enabled drivers build config 00:01:51.433 crypto/caam_jr: not in enabled drivers build config 00:01:51.433 crypto/ccp: not in enabled drivers 
build config 00:01:51.433 crypto/cnxk: not in enabled drivers build config 00:01:51.433 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.433 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.433 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.433 crypto/mlx5: not in enabled drivers build config 00:01:51.433 crypto/mvsam: not in enabled drivers build config 00:01:51.433 crypto/nitrox: not in enabled drivers build config 00:01:51.433 crypto/null: not in enabled drivers build config 00:01:51.433 crypto/octeontx: not in enabled drivers build config 00:01:51.433 crypto/openssl: not in enabled drivers build config 00:01:51.433 crypto/scheduler: not in enabled drivers build config 00:01:51.433 crypto/uadk: not in enabled drivers build config 00:01:51.433 crypto/virtio: not in enabled drivers build config 00:01:51.433 compress/isal: not in enabled drivers build config 00:01:51.433 compress/mlx5: not in enabled drivers build config 00:01:51.433 compress/octeontx: not in enabled drivers build config 00:01:51.433 compress/zlib: not in enabled drivers build config 00:01:51.433 regex/mlx5: not in enabled drivers build config 00:01:51.433 regex/cn9k: not in enabled drivers build config 00:01:51.433 vdpa/ifc: not in enabled drivers build config 00:01:51.433 vdpa/mlx5: not in enabled drivers build config 00:01:51.433 vdpa/sfc: not in enabled drivers build config 00:01:51.433 event/cnxk: not in enabled drivers build config 00:01:51.433 event/dlb2: not in enabled drivers build config 00:01:51.433 event/dpaa: not in enabled drivers build config 00:01:51.433 event/dpaa2: not in enabled drivers build config 00:01:51.433 event/dsw: not in enabled drivers build config 00:01:51.433 event/opdl: not in enabled drivers build config 00:01:51.433 event/skeleton: not in enabled drivers build config 00:01:51.433 event/sw: not in enabled drivers build config 00:01:51.433 event/octeontx: not in enabled drivers build config 00:01:51.433 baseband/acc: not in enabled drivers build config 00:01:51.433 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:51.433 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:51.433 baseband/la12xx: not in enabled drivers build config 00:01:51.433 baseband/null: not in enabled drivers build config 00:01:51.433 baseband/turbo_sw: not in enabled drivers build config 00:01:51.433 gpu/cuda: not in enabled drivers build config 00:01:51.433 00:01:51.433 00:01:51.433 Build targets in project: 311 00:01:51.433 00:01:51.433 DPDK 22.11.4 00:01:51.433 00:01:51.433 User defined options 00:01:51.433 libdir : lib 00:01:51.433 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:51.433 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:51.433 c_link_args : 00:01:51.433 enable_docs : false 00:01:51.433 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:51.433 enable_kmods : false 00:01:51.433 machine : native 00:01:51.433 tests : false 00:01:51.434 00:01:51.434 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.434 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:51.434 21:02:48 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 00:01:51.434 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:01:51.699 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:51.699 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:51.699 [3/740] Generating lib/rte_telemetry_def with a custom command 00:01:51.699 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:51.699 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.699 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.699 [7/740] Generating lib/rte_rcu_def with a custom command 00:01:51.699 [8/740] Generating lib/rte_eal_mingw with a custom command 00:01:51.699 [9/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.699 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.699 [11/740] Generating lib/rte_eal_def with a custom command 00:01:51.699 [12/740] Generating lib/rte_ring_def with a custom command 00:01:51.699 [13/740] Generating lib/rte_ring_mingw with a custom command 00:01:51.699 [14/740] Generating lib/rte_mempool_def with a custom command 00:01:51.699 [15/740] Generating lib/rte_mempool_mingw with a custom command 00:01:51.699 [16/740] Generating lib/rte_rcu_mingw with a custom command 00:01:51.699 [17/740] Generating lib/rte_mbuf_def with a custom command 00:01:51.699 [18/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:51.699 [19/740] Generating lib/rte_net_def with a custom command 00:01:51.699 [20/740] Generating lib/rte_meter_def with a custom command 00:01:51.699 [21/740] Generating lib/rte_meter_mingw with a custom command 00:01:51.699 [22/740] Generating lib/rte_net_mingw with a custom command 00:01:51.699 [23/740] Linking static target lib/librte_kvargs.a 00:01:51.699 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.699 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.699 [26/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.699 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.699 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:51.699 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.699 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.699 [31/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.699 [32/740] Generating lib/rte_pci_def with a custom command 00:01:51.699 [33/740] Generating lib/rte_ethdev_def with a custom command 00:01:51.699 [34/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:51.699 [35/740] Generating lib/rte_pci_mingw with a custom command 00:01:51.699 [36/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.699 [37/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.699 [38/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.699 [39/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.699 [40/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.699 [41/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.699 
[42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.699 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.699 [44/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.699 [45/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.699 [46/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:51.699 [47/740] Generating lib/rte_cmdline_def with a custom command 00:01:51.699 [48/740] Generating lib/rte_metrics_def with a custom command 00:01:51.699 [49/740] Generating lib/rte_metrics_mingw with a custom command 00:01:51.966 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.966 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.966 [52/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.966 [53/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.966 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.966 [55/740] Generating lib/rte_timer_def with a custom command 00:01:51.966 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.966 [57/740] Generating lib/rte_hash_def with a custom command 00:01:51.966 [58/740] Generating lib/rte_timer_mingw with a custom command 00:01:51.966 [59/740] Generating lib/rte_hash_mingw with a custom command 00:01:51.966 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.966 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.966 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.966 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.966 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.966 [65/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.966 [66/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.966 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.966 [68/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.966 [69/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.966 [70/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.966 [71/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.966 [72/740] Generating lib/rte_bbdev_def with a custom command 00:01:51.966 [73/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.966 [74/740] Generating lib/rte_acl_def with a custom command 00:01:51.967 [75/740] Generating lib/rte_acl_mingw with a custom command 00:01:51.967 [76/740] Generating lib/rte_bitratestats_def with a custom command 00:01:51.967 [77/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.967 [78/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:51.967 [79/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:51.967 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.967 [81/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.967 [82/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.967 [83/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:01:51.967 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.967 [85/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.967 [86/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.967 [87/740] Linking static target lib/librte_pci.a 00:01:51.967 [88/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:51.967 [89/740] Generating lib/rte_cfgfile_def with a custom command 00:01:51.967 [90/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.967 [91/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.967 [92/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.967 [93/740] Linking static target lib/librte_meter.a 00:01:51.967 [94/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.967 [95/740] Generating lib/rte_compressdev_def with a custom command 00:01:51.967 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.967 [97/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:51.967 [98/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.967 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.967 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:51.967 [101/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.967 [102/740] Generating lib/rte_bpf_def with a custom command 00:01:51.967 [103/740] Generating lib/rte_bpf_mingw with a custom command 00:01:51.967 [104/740] Linking static target lib/librte_ring.a 00:01:51.967 [105/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:51.967 [106/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.967 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.967 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.967 [109/740] Generating lib/rte_cryptodev_def with a custom command 00:01:51.967 [110/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.967 [111/740] Generating lib/rte_distributor_mingw with a custom command 00:01:51.967 [112/740] Generating lib/rte_distributor_def with a custom command 00:01:51.967 [113/740] Generating lib/rte_efd_def with a custom command 00:01:51.967 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.967 [115/740] Generating lib/rte_efd_mingw with a custom command 00:01:51.967 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.967 [117/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.967 [118/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.967 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.967 [120/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:51.967 [121/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.967 [122/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.967 [123/740] Generating lib/rte_eventdev_def with a custom command 00:01:51.967 [124/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:51.967 [125/740] Generating lib/rte_gpudev_def with a custom command 00:01:51.967 [126/740] Generating 
lib/rte_gpudev_mingw with a custom command 00:01:51.967 [127/740] Generating lib/rte_gro_def with a custom command 00:01:51.967 [128/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.967 [129/740] Generating lib/rte_gro_mingw with a custom command 00:01:52.235 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.235 [131/740] Generating lib/rte_gso_def with a custom command 00:01:52.235 [132/740] Generating lib/rte_gso_mingw with a custom command 00:01:52.235 [133/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.235 [134/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.235 [135/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.235 [136/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:52.235 [137/740] Generating lib/rte_ip_frag_def with a custom command 00:01:52.235 [138/740] Linking target lib/librte_kvargs.so.23.0 00:01:52.235 [139/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.235 [140/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.235 [141/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.235 [142/740] Generating lib/rte_jobstats_def with a custom command 00:01:52.235 [143/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.235 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.235 [145/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.235 [146/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:52.235 [147/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.235 [148/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:52.235 [149/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.235 [150/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.235 [151/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.497 [152/740] Linking static target lib/librte_cfgfile.a 00:01:52.497 [153/740] Generating lib/rte_latencystats_def with a custom command 00:01:52.497 [154/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.497 [155/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:52.497 [156/740] Generating lib/rte_lpm_def with a custom command 00:01:52.497 [157/740] Generating lib/rte_lpm_mingw with a custom command 00:01:52.497 [158/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.497 [159/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.497 [160/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.497 [161/740] Generating lib/rte_member_def with a custom command 00:01:52.497 [162/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:52.497 [163/740] Generating lib/rte_member_mingw with a custom command 00:01:52.497 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.497 [165/740] Generating lib/rte_pcapng_def with a custom command 00:01:52.497 [166/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.497 [167/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.497 [168/740] Compiling C 
object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:52.497 [169/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.497 [170/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.497 [171/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:52.497 [172/740] Linking static target lib/librte_jobstats.a 00:01:52.497 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.497 [174/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.497 [175/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.497 [176/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.497 [177/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:52.497 [178/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.497 [179/740] Linking static target lib/librte_cmdline.a 00:01:52.497 [180/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.497 [181/740] Linking static target lib/librte_timer.a 00:01:52.497 [182/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.497 [183/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.497 [184/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.497 [185/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:52.497 [186/740] Generating lib/rte_power_mingw with a custom command 00:01:52.497 [187/740] Generating lib/rte_power_def with a custom command 00:01:52.497 [188/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.497 [189/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:52.497 [190/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:52.497 [191/740] Linking static target lib/librte_telemetry.a 00:01:52.497 [192/740] Linking static target lib/librte_metrics.a 00:01:52.497 [193/740] Generating lib/rte_rawdev_def with a custom command 00:01:52.497 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:52.497 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:52.497 [196/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.497 [197/740] Generating lib/rte_regexdev_def with a custom command 00:01:52.497 [198/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.497 [199/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:52.497 [200/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.497 [201/740] Generating lib/rte_dmadev_def with a custom command 00:01:52.497 [202/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:52.497 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:52.497 [204/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.497 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:52.497 [206/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.497 [207/740] Generating lib/rte_rib_def with a custom command 00:01:52.497 [208/740] Generating lib/rte_rib_mingw with a custom command 00:01:52.497 [209/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.497 [210/740] Generating lib/rte_reorder_def with a custom command 00:01:52.497 [211/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.497 [212/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.497 [213/740] Generating lib/rte_reorder_mingw with a custom command 00:01:52.497 [214/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.497 [215/740] Generating lib/rte_sched_def with a custom command 00:01:52.497 [216/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.497 [217/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:52.497 [218/740] Generating lib/rte_security_def with a custom command 00:01:52.497 [219/740] Generating lib/rte_sched_mingw with a custom command 00:01:52.497 [220/740] Generating lib/rte_security_mingw with a custom command 00:01:52.497 [221/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:52.497 [222/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.760 [223/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.760 [224/740] Generating lib/rte_stack_def with a custom command 00:01:52.760 [225/740] Linking static target lib/librte_bitratestats.a 00:01:52.760 [226/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:52.760 [227/740] Generating lib/rte_stack_mingw with a custom command 00:01:52.760 [228/740] Linking static target lib/librte_net.a 00:01:52.760 [229/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.760 [230/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:52.760 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.760 [232/740] Generating lib/rte_vhost_def with a custom command 00:01:52.760 [233/740] Generating lib/rte_vhost_mingw with a custom command 00:01:52.760 [234/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.760 [235/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.760 [236/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:52.760 [237/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:52.760 [238/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.760 [239/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.760 [240/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:52.761 [241/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:52.761 [242/740] Generating lib/rte_ipsec_def with a custom command 00:01:52.761 [243/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:52.761 [244/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:52.761 [245/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:52.761 [246/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:52.761 [247/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:52.761 [248/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.761 [249/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.761 [250/740] Generating lib/rte_fib_def with a custom command 00:01:52.761 [251/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:52.761 [252/740] Generating lib/rte_fib_mingw with a custom command 00:01:52.761 [253/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:52.761 [254/740] 
Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:52.761 [255/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.761 [256/740] Linking static target lib/librte_stack.a 00:01:52.761 [257/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:52.761 [258/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:52.761 [259/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:52.761 [260/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:52.761 [261/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:52.761 [262/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:52.761 [263/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.761 [264/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:52.761 [265/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.761 [266/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.761 [267/740] Linking static target lib/librte_compressdev.a 00:01:52.761 [268/740] Generating lib/rte_port_def with a custom command 00:01:52.761 [269/740] Generating lib/rte_port_mingw with a custom command 00:01:52.761 [270/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:52.761 [271/740] Generating lib/rte_pdump_def with a custom command 00:01:52.761 [272/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.761 [273/740] Generating lib/rte_pdump_mingw with a custom command 00:01:53.022 [274/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [275/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [276/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:53.022 [277/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:53.022 [278/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:53.022 [279/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:53.022 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:53.022 [281/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [282/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.022 [283/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:53.022 [284/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.022 [285/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:53.022 [286/740] Linking static target lib/librte_rcu.a 00:01:53.022 [287/740] Linking static target lib/librte_rawdev.a 00:01:53.022 [288/740] Linking static target lib/librte_mempool.a 00:01:53.022 [289/740] Generating lib/rte_table_mingw with a custom command 00:01:53.022 [290/740] Generating lib/rte_table_def with a custom command 00:01:53.022 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:53.022 [292/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [293/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:53.022 [294/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:53.022 [295/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 
00:01:53.022 [296/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [297/740] Linking static target lib/librte_bbdev.a 00:01:53.022 [298/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:53.022 [299/740] Linking static target lib/librte_gpudev.a 00:01:53.022 [300/740] Linking static target lib/librte_gro.a 00:01:53.022 [301/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:53.022 [302/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.022 [303/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [304/740] Linking static target lib/librte_dmadev.a 00:01:53.022 [305/740] Generating lib/rte_pipeline_def with a custom command 00:01:53.022 [306/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:53.022 [307/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:53.022 [308/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.022 [309/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [310/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.022 [311/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:53.022 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:53.022 [313/740] Linking static target lib/librte_gso.a 00:01:53.022 [314/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:53.022 [315/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:53.022 [316/740] Linking target lib/librte_telemetry.so.23.0 00:01:53.022 [317/740] Linking static target lib/librte_latencystats.a 00:01:53.285 [318/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:53.285 [319/740] Generating lib/rte_graph_def with a custom command 00:01:53.285 [320/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:53.285 [321/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:53.285 [322/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:53.285 [323/740] Generating lib/rte_graph_mingw with a custom command 00:01:53.285 [324/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.285 [325/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:53.285 [326/740] Linking static target lib/librte_distributor.a 00:01:53.285 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:53.285 [328/740] Linking static target lib/librte_ip_frag.a 00:01:53.285 [329/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:53.285 [330/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.285 [331/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:53.285 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:53.285 [333/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:53.285 [334/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:53.285 [335/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:53.285 [336/740] Linking static target lib/librte_regexdev.a 00:01:53.285 [337/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:53.285 
[338/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:53.285 [339/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:53.285 [340/740] Generating lib/rte_node_def with a custom command 00:01:53.285 [341/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:53.285 [342/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.285 [343/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.285 [344/740] Generating lib/rte_node_mingw with a custom command 00:01:53.285 [345/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:53.545 [346/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.545 [347/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.545 [348/740] Linking static target lib/librte_eal.a 00:01:53.545 [349/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:53.545 [350/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.545 [351/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:53.545 [352/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.545 [353/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:53.545 [354/740] Linking static target lib/librte_reorder.a 00:01:53.545 [355/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.545 [356/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.545 [357/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:53.545 [358/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:53.545 [359/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.545 [360/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.545 [361/740] Linking static target lib/librte_power.a 00:01:53.545 [362/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.545 [363/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:53.545 [364/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.545 [365/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:53.545 [366/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:53.545 [367/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:53.545 [368/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:53.545 [369/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:53.545 [370/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.545 [371/740] Linking static target lib/librte_pcapng.a 00:01:53.545 [372/740] Linking static target lib/librte_security.a 00:01:53.545 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:53.545 [374/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:53.545 [375/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.545 [376/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:53.545 [377/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:53.545 [378/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:53.545 [379/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.545 [380/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.545 [381/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:53.545 [382/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:53.545 [383/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:53.545 [384/740] Linking static target lib/librte_mbuf.a 00:01:53.545 [385/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:53.855 [386/740] Linking static target lib/librte_bpf.a 00:01:53.855 [387/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.855 [388/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:53.855 [389/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:53.855 [390/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:53.855 [391/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:53.855 [392/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.855 [393/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.855 [394/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:53.855 [395/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.855 [396/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:53.855 [397/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:53.855 [398/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.855 [399/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.855 [400/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:53.855 [401/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:53.855 [402/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:53.855 [403/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:53.855 [404/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:53.855 [405/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.855 [406/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:53.855 [407/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.855 [408/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:53.855 [409/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:53.855 [410/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:53.855 [411/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:53.855 [412/740] Linking static target lib/librte_rib.a 00:01:53.855 [413/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:53.855 [414/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:53.855 [415/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:53.855 [416/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:53.855 [417/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:53.855 [418/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.855 [419/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:53.855 [420/740] Linking static target lib/librte_lpm.a 00:01:53.855 [421/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.855 [422/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:53.855 [423/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:53.855 [424/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.855 [425/740] Linking static target lib/librte_graph.a 00:01:53.855 [426/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.855 [427/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:53.855 [428/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:53.855 [429/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:53.855 [430/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.155 [431/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:54.155 [432/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:54.155 [433/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:54.155 [434/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:54.155 [435/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:54.155 [436/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:54.155 [437/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:54.155 [438/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.155 [439/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:54.155 [440/740] Linking static target lib/librte_efd.a 00:01:54.155 [441/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:54.155 [442/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:54.155 [443/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:54.155 [444/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.155 [445/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:54.155 [446/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.155 [447/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.155 [448/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.155 [449/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.155 [450/740] Linking static target drivers/librte_bus_vdev.a 00:01:54.155 [451/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.155 [452/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:54.155 [453/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:54.155 [454/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.155 [455/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:54.155 [456/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:54.417 [457/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:54.417 
[458/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [459/740] Linking static target lib/librte_fib.a 00:01:54.417 [460/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:54.417 [461/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:54.417 [463/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [464/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [465/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:54.417 [466/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:54.417 [467/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:54.417 [468/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [469/740] Linking static target lib/librte_pdump.a 00:01:54.417 [470/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:54.417 [471/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:54.417 [472/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [473/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.680 [474/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.680 [475/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.680 [476/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.680 [477/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:54.680 [478/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:54.680 [479/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.680 [480/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:54.680 [481/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.680 [482/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.680 [483/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.680 [484/740] Linking static target drivers/librte_bus_pci.a 00:01:54.680 [485/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:54.680 [486/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:54.680 [487/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:54.680 [488/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:54.680 [489/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:54.680 [490/740] Linking static target lib/librte_table.a 00:01:54.680 [491/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:54.680 [492/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:54.680 [493/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:54.680 [494/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 
00:01:54.939 [495/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:54.939 [496/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:54.939 [497/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:54.939 [498/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:54.939 [499/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:54.939 [500/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.939 [501/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:54.939 [502/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:54.939 [503/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:54.939 [504/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.939 [505/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:54.939 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:54.939 [507/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:54.939 [508/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.939 [509/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:54.939 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:54.939 [511/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:54.939 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:54.939 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:54.939 [514/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:54.939 [515/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.939 [516/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:54.939 [517/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:54.939 [518/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:54.939 [519/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:54.939 [520/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:54.939 [521/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:55.198 [522/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.198 [523/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.198 [524/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:55.198 [525/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:55.198 [526/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.198 [527/740] Linking static target lib/librte_cryptodev.a 00:01:55.198 [528/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:55.198 [529/740] Linking static target lib/librte_sched.a 00:01:55.198 [530/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:55.198 [531/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.198 
[532/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:55.198 [533/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:55.198 [534/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:55.198 [535/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:55.198 [536/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:55.198 [537/740] Linking static target lib/librte_node.a 00:01:55.198 [538/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:55.198 [539/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:55.198 [540/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:55.198 [541/740] Linking static target lib/librte_ipsec.a 00:01:55.198 [542/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:55.198 [543/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:55.198 [544/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.198 [545/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.198 [546/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:55.198 [547/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.198 [548/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:55.198 [549/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.198 [550/740] Linking static target drivers/librte_mempool_ring.a 00:01:55.198 [551/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:55.456 [552/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:55.456 [553/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:55.456 [554/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.456 [555/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:55.456 [556/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:55.456 [557/740] Linking static target lib/librte_ethdev.a 00:01:55.456 [558/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:55.457 [559/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:55.457 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:55.457 [561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:55.457 [562/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:55.457 [563/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:55.457 [564/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:55.457 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:55.457 [566/740] Linking static target lib/librte_member.a 00:01:55.457 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:55.457 [568/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.457 [569/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:55.457 [570/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.457 [571/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:55.457 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:55.457 [573/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:55.457 [574/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:55.457 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.457 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:55.457 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:55.457 [578/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.715 [579/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:55.715 [580/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:55.715 [581/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:55.715 [582/740] Linking static target lib/librte_eventdev.a 00:01:55.715 [583/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:55.715 [584/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:55.715 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:55.715 [586/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:55.715 [587/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.715 [588/740] Linking static target lib/librte_port.a 00:01:55.715 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:55.715 [590/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.715 [591/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:55.715 [592/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.715 [593/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:55.715 [594/740] Linking static target lib/librte_hash.a 00:01:55.715 [595/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:55.715 [596/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:55.715 [597/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:55.715 [598/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:55.715 [599/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:55.715 [600/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:55.974 [601/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:55.974 [602/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:55.974 [603/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:55.975 [604/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.975 [605/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:55.975 [606/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:55.975 [607/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:55.975 [608/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:56.234 [609/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:56.234 [610/740] Linking static target lib/librte_acl.a 00:01:56.234 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.234 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:56.491 [613/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.491 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:56.750 [615/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.750 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:56.750 [617/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.750 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:57.008 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:57.008 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:57.267 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:57.835 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:57.835 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:58.094 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:58.094 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.094 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.354 [627/740] Linking static target drivers/librte_net_i40e.a 00:01:58.613 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.613 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.872 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:58.872 [631/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.131 [632/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:59.390 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.580 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.958 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.958 [636/740] Linking static target lib/librte_vhost.a 00:02:05.526 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:05.526 [638/740] Linking static target lib/librte_pipeline.a 00:02:05.785 [639/740] Linking target app/dpdk-dumpcap 00:02:05.785 [640/740] Linking target app/dpdk-pdump 00:02:05.785 [641/740] Linking target app/dpdk-test-acl 00:02:05.785 [642/740] Linking target app/dpdk-proc-info 00:02:05.785 [643/740] Linking target app/dpdk-test-fib 00:02:05.785 [644/740] Linking target app/dpdk-test-crypto-perf 00:02:05.785 [645/740] Linking target app/dpdk-test-regex 00:02:05.785 [646/740] Linking target app/dpdk-test-cmdline 00:02:05.785 [647/740] Linking target app/dpdk-test-bbdev 00:02:05.785 [648/740] Linking target app/dpdk-test-flow-perf 00:02:05.785 [649/740] Linking target app/dpdk-test-pipeline 00:02:05.785 [650/740] Linking target app/dpdk-test-security-perf 00:02:05.785 [651/740] Linking 
target app/dpdk-test-compress-perf 00:02:05.785 [652/740] Linking target app/dpdk-test-gpudev 00:02:05.785 [653/740] Linking target app/dpdk-test-sad 00:02:05.785 [654/740] Linking target app/dpdk-test-eventdev 00:02:05.785 [655/740] Linking target app/dpdk-testpmd 00:02:07.159 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.159 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.417 [658/740] Linking target lib/librte_eal.so.23.0 00:02:07.417 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:07.417 [660/740] Linking target lib/librte_ring.so.23.0 00:02:07.417 [661/740] Linking target lib/librte_timer.so.23.0 00:02:07.417 [662/740] Linking target lib/librte_meter.so.23.0 00:02:07.417 [663/740] Linking target lib/librte_pci.so.23.0 00:02:07.417 [664/740] Linking target lib/librte_dmadev.so.23.0 00:02:07.417 [665/740] Linking target lib/librte_rawdev.so.23.0 00:02:07.417 [666/740] Linking target lib/librte_cfgfile.so.23.0 00:02:07.417 [667/740] Linking target lib/librte_jobstats.so.23.0 00:02:07.417 [668/740] Linking target lib/librte_stack.so.23.0 00:02:07.417 [669/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:07.417 [670/740] Linking target lib/librte_graph.so.23.0 00:02:07.417 [671/740] Linking target lib/librte_acl.so.23.0 00:02:07.676 [672/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:07.676 [673/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:07.676 [674/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:07.676 [675/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:07.676 [676/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:07.676 [677/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:07.676 [678/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:07.676 [679/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:07.676 [680/740] Linking target lib/librte_rcu.so.23.0 00:02:07.676 [681/740] Linking target lib/librte_mempool.so.23.0 00:02:07.676 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:07.935 [683/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:07.935 [684/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:07.935 [685/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:07.935 [686/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:07.935 [687/740] Linking target lib/librte_mbuf.so.23.0 00:02:07.935 [688/740] Linking target lib/librte_rib.so.23.0 00:02:07.935 [689/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:07.935 [690/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:07.935 [691/740] Linking target lib/librte_gpudev.so.23.0 00:02:07.935 [692/740] Linking target lib/librte_compressdev.so.23.0 00:02:07.935 [693/740] Linking target lib/librte_regexdev.so.23.0 00:02:07.935 [694/740] Linking target lib/librte_net.so.23.0 00:02:07.935 [695/740] Linking target lib/librte_bbdev.so.23.0 00:02:07.935 [696/740] Linking target lib/librte_distributor.so.23.0 00:02:07.935 [697/740] Linking 
target lib/librte_sched.so.23.0 00:02:07.935 [698/740] Linking target lib/librte_reorder.so.23.0 00:02:07.935 [699/740] Linking target lib/librte_cryptodev.so.23.0 00:02:08.193 [700/740] Linking target lib/librte_fib.so.23.0 00:02:08.193 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:08.194 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:08.194 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:08.194 [704/740] Linking target lib/librte_cmdline.so.23.0 00:02:08.194 [705/740] Linking target lib/librte_hash.so.23.0 00:02:08.194 [706/740] Linking target lib/librte_ethdev.so.23.0 00:02:08.194 [707/740] Linking target lib/librte_security.so.23.0 00:02:08.452 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:08.452 [709/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:08.452 [710/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:08.452 [711/740] Linking target lib/librte_efd.so.23.0 00:02:08.452 [712/740] Linking target lib/librte_lpm.so.23.0 00:02:08.452 [713/740] Linking target lib/librte_member.so.23.0 00:02:08.452 [714/740] Linking target lib/librte_gro.so.23.0 00:02:08.452 [715/740] Linking target lib/librte_eventdev.so.23.0 00:02:08.452 [716/740] Linking target lib/librte_metrics.so.23.0 00:02:08.452 [717/740] Linking target lib/librte_power.so.23.0 00:02:08.452 [718/740] Linking target lib/librte_pcapng.so.23.0 00:02:08.452 [719/740] Linking target lib/librte_ip_frag.so.23.0 00:02:08.452 [720/740] Linking target lib/librte_gso.so.23.0 00:02:08.452 [721/740] Linking target lib/librte_bpf.so.23.0 00:02:08.452 [722/740] Linking target lib/librte_ipsec.so.23.0 00:02:08.452 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:08.452 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:08.452 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:08.452 [726/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:08.452 [727/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:08.452 [728/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:08.452 [729/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:08.711 [730/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:08.711 [731/740] Linking target lib/librte_node.so.23.0 00:02:08.711 [732/740] Linking target lib/librte_bitratestats.so.23.0 00:02:08.711 [733/740] Linking target lib/librte_latencystats.so.23.0 00:02:08.711 [734/740] Linking target lib/librte_pdump.so.23.0 00:02:08.711 [735/740] Linking target lib/librte_port.so.23.0 00:02:08.711 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:08.711 [737/740] Linking target lib/librte_table.so.23.0 00:02:08.970 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:10.348 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.348 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:10.348 21:03:07 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 install 
00:02:10.348 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:02:10.348 [0/1] Installing files. 00:02:10.611 Installing subdir /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples 00:02:10.611 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.611 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.611 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:10.612 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 
00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:10.613 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:10.614 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:10.614 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:10.614 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/main.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.615 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:10.616 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.616 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 
00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:10.617 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:10.618 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:10.618 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.618 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:10.619 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:10.619 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:10.619 Installing lib/librte_kvargs.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_telemetry.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_eal.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_rcu.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_mempool.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_mbuf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_net.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_meter.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_ethdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_cmdline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_cmdline.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_metrics.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_hash.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_timer.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_acl.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_bbdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_bpf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_compressdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_distributor.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_efd.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.619 Installing lib/librte_eventdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_gpudev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_gro.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_gso.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 
Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_jobstats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_latencystats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_lpm.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_member.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_pcapng.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_power.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_rawdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_regexdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_dmadev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.620 Installing lib/librte_rib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_reorder.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_sched.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_security.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_stack.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_vhost.a to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_ipsec.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_fib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_port.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_pdump.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_table.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_pipeline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_graph.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_node.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:10.882 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:10.882 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:10.882 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.882 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:10.882 Installing app/dpdk-dumpcap to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-pdump to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-proc-info to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-test-acl to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-test-cmdline to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.882 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-fib to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-testpmd to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-regex to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-sad to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.883 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 
00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.884 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_comp.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.885 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_stub.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.886 
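The four usertools scripts installed just above are ordinary Python utilities copied into dpdk/build/bin. For illustration only (the flags shown are the commonly documented ones for these tools, not commands taken from this log), they would typically be exercised like this after such an install:
# Illustrative usage of the DPDK usertools installed above; paths match the install destinations in this log.
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin/dpdk-devbind.py --status    # list devices and their current driver bindings
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin/dpdk-hugepages.py --show    # report hugepage reservations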
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:02:10.886 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:02:10.886 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:10.886 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:10.886 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:10.886 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:10.886 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:10.886 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:10.886 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:10.886 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:10.886 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:10.886 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:10.886 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:10.886 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:10.886 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:10.886 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:10.886 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:10.886 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so 00:02:10.887 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:10.887 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:10.887 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:10.887 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:10.887 Installing symlink pointing to librte_pci.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:10.887 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:10.887 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:10.887 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:10.887 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:10.887 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:10.887 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:10.887 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:10.887 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:10.887 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:10.887 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:10.887 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:10.887 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:10.887 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:10.887 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:10.887 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:10.887 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:10.887 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:10.887 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:10.887 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:10.887 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:10.887 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:10.887 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:10.887 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:10.887 Installing symlink pointing 
to librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:10.887 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:10.887 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:10.887 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:10.887 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:10.887 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:10.887 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:10.887 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:10.887 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:10.887 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:10.887 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:10.887 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:10.887 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:10.887 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:10.887 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:10.887 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:10.887 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:10.887 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:10.887 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:10.887 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:10.887 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:10.887 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so 00:02:10.887 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:10.887 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:10.887 
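Each DPDK runtime library is installed with a pair of symlinks in build/lib: a soversion link (librte_X.so.23 pointing at librte_X.so.23.0) that the dynamic loader resolves at run time, and an unversioned link (librte_X.so pointing at librte_X.so.23) used when linking. A minimal hand-written sketch of one such pair, using librte_pcapng from the entries above purely as an illustration:
# Illustrative equivalent of the two symlink installs logged for librte_pcapng above.
cd /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib
ln -sf librte_pcapng.so.23.0 librte_pcapng.so.23   # soversion link consumed by the dynamic loader
ln -sf librte_pcapng.so.23   librte_pcapng.so      # unversioned link consumed at link time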
Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:10.887 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so 00:02:10.887 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:10.887 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:10.887 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:10.887 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:10.887 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:10.887 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:10.887 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:10.887 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:10.887 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:10.887 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:10.887 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:10.887 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:10.887 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:10.887 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so 00:02:10.887 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:10.887 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:10.887 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:10.887 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:10.887 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:10.887 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:10.887 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:10.887 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:10.887 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:10.887 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:10.887 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:10.887 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:10.887 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:10.887 Installing symlink pointing to 
librte_stack.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:10.887 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:10.887 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:10.887 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:10.887 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:10.887 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:10.887 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:10.887 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:10.887 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so 00:02:10.887 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:10.887 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:10.887 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:10.887 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so 00:02:10.887 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:10.887 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:10.887 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:10.887 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:10.887 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:10.887 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so 00:02:10.888 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:10.888 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:10.888 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:10.888 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:10.888 Installing symlink pointing to librte_mempool_ring.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:10.888 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:10.888 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:10.888 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:10.888 Running custom install script '/bin/sh /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:10.888 21:03:07 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:10.888 21:03:07 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:10.888 21:03:07 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:10.888 21:03:07 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:10.888 00:02:10.888 real 0m24.514s 00:02:10.888 user 6m32.999s 00:02:10.888 sys 2m10.904s 00:02:10.888 21:03:07 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:10.888 21:03:07 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:10.888 ************************************ 00:02:10.888 END TEST build_native_dpdk 00:02:10.888 ************************************ 00:02:11.147 21:03:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.147 21:03:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.147 21:03:07 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:11.147 21:03:07 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:11.147 21:03:07 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:11.147 21:03:07 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:11.147 21:03:07 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:11.147 21:03:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.147 ************************************ 00:02:11.147 START TEST autobuild_llvm_precompile 00:02:11.147 ************************************ 00:02:11.147 21:03:07 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:02:11.147 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:11.147 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:02:11.147 Target: x86_64-redhat-linux-gnu 00:02:11.148 Thread model: posix 00:02:11.148 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:02:11.148 21:03:07 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:11.406 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:11.406 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:11.406 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:11.664 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:11.922 Using 'verbs' RDMA provider 00:02:27.751 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:39.955 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:40.240 Creating mk/config.mk...done. 00:02:40.240 Creating mk/cc.flags.mk...done. 00:02:40.240 Type 'make' to build. 00:02:40.240 00:02:40.240 real 0m29.192s 00:02:40.240 user 0m12.437s 00:02:40.240 sys 0m16.116s 00:02:40.240 21:03:37 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:40.240 21:03:37 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:40.240 ************************************ 00:02:40.240 END TEST autobuild_llvm_precompile 00:02:40.240 ************************************ 00:02:40.240 21:03:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:40.240 21:03:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:40.240 21:03:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:40.240 21:03:37 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:40.240 21:03:37 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:40.499 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
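The "Using .../dpdk/build/lib/pkgconfig for additional libs" messages correspond to the libdpdk.pc and libdpdk-libs.pc files installed earlier in this log; the configure step reads the DPDK compile and link flags from that directory. A minimal sketch of inspecting the same metadata by hand, assuming only standard pkg-config behaviour (these commands are not taken from the log):
# Illustrative: query the freshly installed DPDK pkg-config data that the configure step is using.
export PKG_CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # DPDK version recorded in libdpdk.pc
pkg-config --cflags libdpdk       # include flags pointing at dpdk/build/include
pkg-config --libs libdpdk         # linker flags for the librte_* libraries installed above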
00:02:40.757 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:40.757 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:40.757 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:41.324 Using 'verbs' RDMA provider 00:02:54.460 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:06.661 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:06.661 Creating mk/config.mk...done. 00:03:06.661 Creating mk/cc.flags.mk...done. 00:03:06.661 Type 'make' to build. 00:03:06.661 21:04:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:03:06.661 21:04:02 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:06.661 21:04:02 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:06.661 21:04:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:06.661 ************************************ 00:03:06.661 START TEST make 00:03:06.661 ************************************ 00:03:06.661 21:04:02 make -- common/autotest_common.sh@1121 -- $ make -j112 00:03:06.661 make[1]: Nothing to be done for 'all'. 00:03:07.596 The Meson build system 00:03:07.596 Version: 1.3.1 00:03:07.596 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:07.596 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.596 Build type: native build 00:03:07.596 Project name: libvfio-user 00:03:07.596 Project version: 0.0.1 00:03:07.596 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:07.596 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:07.596 Host machine cpu family: x86_64 00:03:07.596 Host machine cpu: x86_64 00:03:07.596 Run-time dependency threads found: YES 00:03:07.596 Library dl found: YES 00:03:07.596 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:07.596 Run-time dependency json-c found: YES 0.17 00:03:07.596 Run-time dependency cmocka found: YES 1.1.7 00:03:07.596 Program pytest-3 found: NO 00:03:07.596 Program flake8 found: NO 00:03:07.596 Program misspell-fixer found: NO 00:03:07.596 Program restructuredtext-lint found: NO 00:03:07.596 Program valgrind found: YES (/usr/bin/valgrind) 00:03:07.596 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:07.596 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.596 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.596 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:07.596 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:07.596 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:07.596 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:07.596 Build targets in project: 8 00:03:07.596 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:07.596 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:07.596 00:03:07.596 libvfio-user 0.0.1 00:03:07.596 00:03:07.596 User defined options 00:03:07.596 buildtype : debug 00:03:07.596 default_library: static 00:03:07.596 libdir : /usr/local/lib 00:03:07.596 00:03:07.596 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.855 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:07.855 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:08.113 [2/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:08.113 [3/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:08.113 [4/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:08.113 [5/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:08.113 [6/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:08.113 [7/36] Compiling C object samples/null.p/null.c.o 00:03:08.113 [8/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:08.113 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:08.113 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:08.113 [11/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:08.113 [12/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:08.113 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:08.113 [14/36] Compiling C object samples/server.p/server.c.o 00:03:08.113 [15/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:08.113 [16/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:08.113 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:08.113 [18/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:08.113 [19/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:08.113 [20/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:08.113 [21/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:08.113 [22/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:08.113 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:08.114 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:08.114 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:08.114 [26/36] Compiling C object samples/client.p/client.c.o 00:03:08.114 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:08.114 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:08.114 [29/36] Linking static target lib/libvfio-user.a 00:03:08.114 [30/36] Linking target samples/client 00:03:08.114 [31/36] Linking target test/unit_tests 00:03:08.114 [32/36] Linking target samples/gpio-pci-idio-16 00:03:08.114 [33/36] Linking target samples/null 00:03:08.114 [34/36] Linking target samples/lspci 00:03:08.114 [35/36] Linking target samples/server 00:03:08.114 [36/36] Linking target samples/shadow_ioeventfd_server 00:03:08.114 INFO: autodetecting backend as ninja 00:03:08.114 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:08.114 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:08.682 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:08.682 ninja: no work to do. 00:03:11.975 CC lib/log/log.o 00:03:11.975 CC lib/log/log_flags.o 00:03:11.975 CC lib/log/log_deprecated.o 00:03:11.975 CC lib/ut_mock/mock.o 00:03:11.975 CC lib/ut/ut.o 00:03:11.975 LIB libspdk_log.a 00:03:11.975 LIB libspdk_ut.a 00:03:11.975 LIB libspdk_ut_mock.a 00:03:11.975 CC lib/ioat/ioat.o 00:03:11.975 CC lib/dma/dma.o 00:03:11.975 CXX lib/trace_parser/trace.o 00:03:11.975 CC lib/util/base64.o 00:03:11.975 CC lib/util/crc16.o 00:03:11.975 CC lib/util/bit_array.o 00:03:11.975 CC lib/util/cpuset.o 00:03:11.975 CC lib/util/crc32.o 00:03:11.975 CC lib/util/crc32c.o 00:03:11.975 CC lib/util/crc32_ieee.o 00:03:11.975 CC lib/util/crc64.o 00:03:11.975 CC lib/util/dif.o 00:03:11.975 CC lib/util/fd.o 00:03:11.975 CC lib/util/file.o 00:03:11.975 CC lib/util/hexlify.o 00:03:11.975 CC lib/util/math.o 00:03:11.975 CC lib/util/iov.o 00:03:11.975 CC lib/util/pipe.o 00:03:11.975 CC lib/util/strerror_tls.o 00:03:11.975 CC lib/util/string.o 00:03:11.975 CC lib/util/uuid.o 00:03:11.975 CC lib/util/xor.o 00:03:11.975 CC lib/util/fd_group.o 00:03:11.975 CC lib/util/zipf.o 00:03:12.234 CC lib/vfio_user/host/vfio_user_pci.o 00:03:12.234 CC lib/vfio_user/host/vfio_user.o 00:03:12.234 LIB libspdk_dma.a 00:03:12.234 LIB libspdk_ioat.a 00:03:12.234 LIB libspdk_vfio_user.a 00:03:12.234 LIB libspdk_util.a 00:03:12.493 LIB libspdk_trace_parser.a 00:03:12.753 CC lib/json/json_parse.o 00:03:12.753 CC lib/json/json_write.o 00:03:12.753 CC lib/json/json_util.o 00:03:12.753 CC lib/conf/conf.o 00:03:12.753 CC lib/idxd/idxd.o 00:03:12.753 CC lib/idxd/idxd_user.o 00:03:12.753 CC lib/rdma/common.o 00:03:12.753 CC lib/idxd/idxd_kernel.o 00:03:12.753 CC lib/rdma/rdma_verbs.o 00:03:12.753 CC lib/env_dpdk/env.o 00:03:12.753 CC lib/env_dpdk/memory.o 00:03:12.753 CC lib/env_dpdk/init.o 00:03:12.753 CC lib/env_dpdk/pci.o 00:03:12.753 CC lib/env_dpdk/pci_ioat.o 00:03:12.753 CC lib/env_dpdk/pci_vmd.o 00:03:12.753 CC lib/env_dpdk/threads.o 00:03:12.753 CC lib/env_dpdk/pci_virtio.o 00:03:12.753 CC lib/vmd/vmd.o 00:03:12.753 CC lib/env_dpdk/pci_idxd.o 00:03:12.753 CC lib/env_dpdk/pci_event.o 00:03:12.753 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.753 CC lib/env_dpdk/sigbus_handler.o 00:03:12.753 CC lib/vmd/led.o 00:03:12.753 CC lib/env_dpdk/pci_dpdk.o 00:03:12.753 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.753 LIB libspdk_conf.a 00:03:12.753 LIB libspdk_json.a 00:03:12.753 LIB libspdk_rdma.a 00:03:13.012 LIB libspdk_idxd.a 00:03:13.012 LIB libspdk_vmd.a 00:03:13.271 CC lib/jsonrpc/jsonrpc_server.o 00:03:13.271 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.271 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.271 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:13.271 LIB libspdk_jsonrpc.a 00:03:13.530 LIB libspdk_env_dpdk.a 00:03:13.530 CC lib/rpc/rpc.o 00:03:13.790 LIB libspdk_rpc.a 00:03:14.049 CC lib/trace/trace_flags.o 00:03:14.049 CC lib/trace/trace.o 00:03:14.049 CC lib/trace/trace_rpc.o 00:03:14.049 CC lib/notify/notify.o 00:03:14.049 CC lib/notify/notify_rpc.o 00:03:14.049 CC lib/keyring/keyring.o 00:03:14.049 CC lib/keyring/keyring_rpc.o 00:03:14.308 LIB libspdk_notify.a 00:03:14.309 LIB libspdk_trace.a 00:03:14.309 LIB libspdk_keyring.a 00:03:14.568 CC lib/thread/thread.o 00:03:14.568 CC lib/thread/iobuf.o 00:03:14.568 CC lib/sock/sock.o 00:03:14.568 CC lib/sock/sock_rpc.o 00:03:14.828 LIB libspdk_sock.a 
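The interleaved "CC .../*.o" and "LIB libspdk_*.a" lines above and below are SPDK's make output: each component's sources are compiled to objects and then archived into a static library. Roughly, and only as a sketch under the CC=clang-16 setting exported earlier (exact compiler flags and output paths are not shown in this log), one such pair amounts to:
# Illustrative compile-then-archive pattern behind a "CC ... / LIB libspdk_log.a" sequence.
clang-16 -c lib/log/log.c -o lib/log/log.o     # one "CC" line per object file
ar crs build/lib/libspdk_log.a lib/log/*.o     # the "LIB" line collects the objects into a static archive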
00:03:15.087 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:15.087 CC lib/nvme/nvme_ctrlr.o 00:03:15.087 CC lib/nvme/nvme_ns.o 00:03:15.087 CC lib/nvme/nvme_fabric.o 00:03:15.087 CC lib/nvme/nvme_pcie_common.o 00:03:15.087 CC lib/nvme/nvme_ns_cmd.o 00:03:15.087 CC lib/nvme/nvme.o 00:03:15.087 CC lib/nvme/nvme_qpair.o 00:03:15.087 CC lib/nvme/nvme_pcie.o 00:03:15.087 CC lib/nvme/nvme_quirks.o 00:03:15.087 CC lib/nvme/nvme_transport.o 00:03:15.087 CC lib/nvme/nvme_discovery.o 00:03:15.087 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.087 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.087 CC lib/nvme/nvme_tcp.o 00:03:15.087 CC lib/nvme/nvme_opal.o 00:03:15.087 CC lib/nvme/nvme_io_msg.o 00:03:15.087 CC lib/nvme/nvme_poll_group.o 00:03:15.087 CC lib/nvme/nvme_zns.o 00:03:15.087 CC lib/nvme/nvme_stubs.o 00:03:15.087 CC lib/nvme/nvme_auth.o 00:03:15.087 CC lib/nvme/nvme_cuse.o 00:03:15.087 CC lib/nvme/nvme_vfio_user.o 00:03:15.087 CC lib/nvme/nvme_rdma.o 00:03:15.347 LIB libspdk_thread.a 00:03:15.606 CC lib/init/subsystem.o 00:03:15.606 CC lib/init/json_config.o 00:03:15.606 CC lib/init/rpc.o 00:03:15.606 CC lib/init/subsystem_rpc.o 00:03:15.606 CC lib/vfu_tgt/tgt_endpoint.o 00:03:15.606 CC lib/vfu_tgt/tgt_rpc.o 00:03:15.606 CC lib/blob/blobstore.o 00:03:15.606 CC lib/virtio/virtio.o 00:03:15.607 CC lib/virtio/virtio_vhost_user.o 00:03:15.607 CC lib/accel/accel.o 00:03:15.607 CC lib/blob/request.o 00:03:15.607 CC lib/virtio/virtio_vfio_user.o 00:03:15.607 CC lib/accel/accel_rpc.o 00:03:15.607 CC lib/blob/zeroes.o 00:03:15.607 CC lib/virtio/virtio_pci.o 00:03:15.607 CC lib/accel/accel_sw.o 00:03:15.607 CC lib/blob/blob_bs_dev.o 00:03:15.866 LIB libspdk_init.a 00:03:15.866 LIB libspdk_vfu_tgt.a 00:03:15.866 LIB libspdk_virtio.a 00:03:16.125 CC lib/event/app.o 00:03:16.125 CC lib/event/reactor.o 00:03:16.125 CC lib/event/log_rpc.o 00:03:16.125 CC lib/event/app_rpc.o 00:03:16.125 CC lib/event/scheduler_static.o 00:03:16.384 LIB libspdk_accel.a 00:03:16.384 LIB libspdk_event.a 00:03:16.384 LIB libspdk_nvme.a 00:03:16.642 CC lib/bdev/bdev.o 00:03:16.642 CC lib/bdev/bdev_rpc.o 00:03:16.642 CC lib/bdev/bdev_zone.o 00:03:16.642 CC lib/bdev/part.o 00:03:16.642 CC lib/bdev/scsi_nvme.o 00:03:17.209 LIB libspdk_blob.a 00:03:17.776 CC lib/blobfs/blobfs.o 00:03:17.776 CC lib/blobfs/tree.o 00:03:17.776 CC lib/lvol/lvol.o 00:03:18.034 LIB libspdk_blobfs.a 00:03:18.034 LIB libspdk_lvol.a 00:03:18.293 LIB libspdk_bdev.a 00:03:18.551 CC lib/nbd/nbd.o 00:03:18.551 CC lib/nbd/nbd_rpc.o 00:03:18.551 CC lib/scsi/dev.o 00:03:18.551 CC lib/scsi/lun.o 00:03:18.551 CC lib/scsi/port.o 00:03:18.551 CC lib/scsi/scsi.o 00:03:18.551 CC lib/scsi/scsi_bdev.o 00:03:18.551 CC lib/scsi/task.o 00:03:18.551 CC lib/nvmf/ctrlr_discovery.o 00:03:18.551 CC lib/nvmf/ctrlr.o 00:03:18.551 CC lib/scsi/scsi_pr.o 00:03:18.551 CC lib/scsi/scsi_rpc.o 00:03:18.551 CC lib/nvmf/subsystem.o 00:03:18.551 CC lib/nvmf/ctrlr_bdev.o 00:03:18.551 CC lib/nvmf/nvmf.o 00:03:18.551 CC lib/nvmf/nvmf_rpc.o 00:03:18.551 CC lib/nvmf/transport.o 00:03:18.551 CC lib/nvmf/tcp.o 00:03:18.551 CC lib/nvmf/stubs.o 00:03:18.551 CC lib/nvmf/mdns_server.o 00:03:18.551 CC lib/nvmf/vfio_user.o 00:03:18.551 CC lib/ublk/ublk.o 00:03:18.551 CC lib/nvmf/rdma.o 00:03:18.551 CC lib/ublk/ublk_rpc.o 00:03:18.551 CC lib/nvmf/auth.o 00:03:18.551 CC lib/ftl/ftl_layout.o 00:03:18.551 CC lib/ftl/ftl_core.o 00:03:18.551 CC lib/ftl/ftl_init.o 00:03:18.551 CC lib/ftl/ftl_debug.o 00:03:18.551 CC lib/ftl/ftl_io.o 00:03:18.551 CC lib/ftl/ftl_l2p.o 00:03:18.551 CC lib/ftl/ftl_sb.o 00:03:18.551 CC 
lib/ftl/ftl_l2p_flat.o 00:03:18.551 CC lib/ftl/ftl_nv_cache.o 00:03:18.551 CC lib/ftl/ftl_band.o 00:03:18.551 CC lib/ftl/ftl_band_ops.o 00:03:18.810 CC lib/ftl/ftl_writer.o 00:03:18.810 CC lib/ftl/ftl_rq.o 00:03:18.810 CC lib/ftl/ftl_reloc.o 00:03:18.810 CC lib/ftl/ftl_l2p_cache.o 00:03:18.810 CC lib/ftl/ftl_p2l.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.810 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.810 CC lib/ftl/utils/ftl_conf.o 00:03:18.810 CC lib/ftl/utils/ftl_mempool.o 00:03:18.810 CC lib/ftl/utils/ftl_md.o 00:03:18.810 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.810 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.810 CC lib/ftl/utils/ftl_property.o 00:03:18.810 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.810 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.810 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.810 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.810 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.810 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.810 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.810 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.810 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.810 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.810 CC lib/ftl/base/ftl_base_dev.o 00:03:18.810 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.810 CC lib/ftl/ftl_trace.o 00:03:18.810 LIB libspdk_nbd.a 00:03:19.069 LIB libspdk_scsi.a 00:03:19.069 LIB libspdk_ublk.a 00:03:19.328 CC lib/iscsi/conn.o 00:03:19.328 CC lib/iscsi/init_grp.o 00:03:19.328 CC lib/iscsi/iscsi.o 00:03:19.328 CC lib/iscsi/md5.o 00:03:19.328 CC lib/iscsi/param.o 00:03:19.328 CC lib/iscsi/portal_grp.o 00:03:19.328 CC lib/iscsi/tgt_node.o 00:03:19.328 CC lib/iscsi/iscsi_subsystem.o 00:03:19.328 CC lib/iscsi/iscsi_rpc.o 00:03:19.328 CC lib/iscsi/task.o 00:03:19.328 CC lib/vhost/vhost.o 00:03:19.328 CC lib/vhost/vhost_blk.o 00:03:19.328 CC lib/vhost/vhost_scsi.o 00:03:19.328 CC lib/vhost/vhost_rpc.o 00:03:19.328 CC lib/vhost/rte_vhost_user.o 00:03:19.328 LIB libspdk_ftl.a 00:03:19.896 LIB libspdk_nvmf.a 00:03:19.896 LIB libspdk_vhost.a 00:03:20.156 LIB libspdk_iscsi.a 00:03:20.415 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.675 CC module/vfu_device/vfu_virtio.o 00:03:20.675 CC module/vfu_device/vfu_virtio_blk.o 00:03:20.675 CC module/vfu_device/vfu_virtio_scsi.o 00:03:20.675 CC module/vfu_device/vfu_virtio_rpc.o 00:03:20.675 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.675 LIB libspdk_env_dpdk_rpc.a 00:03:20.675 CC module/sock/posix/posix.o 00:03:20.675 CC module/accel/ioat/accel_ioat.o 00:03:20.675 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.675 CC module/accel/iaa/accel_iaa.o 00:03:20.675 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.675 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.675 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.675 CC module/keyring/linux/keyring.o 00:03:20.675 CC module/accel/error/accel_error.o 00:03:20.675 CC module/keyring/linux/keyring_rpc.o 00:03:20.675 CC module/accel/error/accel_error_rpc.o 00:03:20.675 CC module/keyring/file/keyring.o 00:03:20.675 CC 
module/blob/bdev/blob_bdev.o 00:03:20.675 CC module/keyring/file/keyring_rpc.o 00:03:20.675 CC module/accel/dsa/accel_dsa.o 00:03:20.675 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.675 LIB libspdk_scheduler_dynamic.a 00:03:20.675 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.675 LIB libspdk_keyring_file.a 00:03:20.675 LIB libspdk_keyring_linux.a 00:03:20.675 LIB libspdk_scheduler_gscheduler.a 00:03:20.675 LIB libspdk_accel_ioat.a 00:03:20.675 LIB libspdk_accel_iaa.a 00:03:20.675 LIB libspdk_accel_error.a 00:03:20.934 LIB libspdk_blob_bdev.a 00:03:20.934 LIB libspdk_accel_dsa.a 00:03:20.934 LIB libspdk_vfu_device.a 00:03:21.192 LIB libspdk_sock_posix.a 00:03:21.192 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.192 CC module/bdev/error/vbdev_error.o 00:03:21.192 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.192 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.192 CC module/bdev/aio/bdev_aio_rpc.o 00:03:21.192 CC module/bdev/aio/bdev_aio.o 00:03:21.192 CC module/bdev/delay/vbdev_delay.o 00:03:21.192 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.192 CC module/bdev/malloc/bdev_malloc.o 00:03:21.192 CC module/bdev/iscsi/bdev_iscsi.o 00:03:21.192 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:21.192 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.192 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.192 CC module/bdev/gpt/gpt.o 00:03:21.192 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.192 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.192 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.192 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.192 CC module/bdev/split/vbdev_split.o 00:03:21.192 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.192 CC module/bdev/raid/bdev_raid.o 00:03:21.192 CC module/bdev/raid/bdev_raid_rpc.o 00:03:21.192 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.192 CC module/bdev/raid/bdev_raid_sb.o 00:03:21.192 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.192 CC module/bdev/raid/raid1.o 00:03:21.192 CC module/bdev/nvme/bdev_nvme.o 00:03:21.192 CC module/bdev/raid/raid0.o 00:03:21.192 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.192 CC module/bdev/raid/concat.o 00:03:21.192 CC module/bdev/nvme/nvme_rpc.o 00:03:21.192 CC module/bdev/nvme/bdev_mdns_client.o 00:03:21.193 CC module/bdev/ftl/bdev_ftl.o 00:03:21.193 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:21.193 CC module/bdev/nvme/vbdev_opal.o 00:03:21.193 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:21.193 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:21.193 CC module/bdev/null/bdev_null.o 00:03:21.193 CC module/bdev/null/bdev_null_rpc.o 00:03:21.193 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:21.193 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:21.193 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:21.450 LIB libspdk_blobfs_bdev.a 00:03:21.450 LIB libspdk_bdev_error.a 00:03:21.450 LIB libspdk_bdev_split.a 00:03:21.450 LIB libspdk_bdev_gpt.a 00:03:21.450 LIB libspdk_bdev_aio.a 00:03:21.450 LIB libspdk_bdev_null.a 00:03:21.450 LIB libspdk_bdev_ftl.a 00:03:21.450 LIB libspdk_bdev_passthru.a 00:03:21.450 LIB libspdk_bdev_iscsi.a 00:03:21.450 LIB libspdk_bdev_zone_block.a 00:03:21.450 LIB libspdk_bdev_delay.a 00:03:21.450 LIB libspdk_bdev_malloc.a 00:03:21.792 LIB libspdk_bdev_lvol.a 00:03:21.792 LIB libspdk_bdev_virtio.a 00:03:21.792 LIB libspdk_bdev_raid.a 00:03:22.729 LIB libspdk_bdev_nvme.a 00:03:23.296 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:23.296 CC module/event/subsystems/sock/sock.o 00:03:23.296 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:23.296 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:23.296 CC module/event/subsystems/vmd/vmd.o 00:03:23.296 CC module/event/subsystems/keyring/keyring.o 00:03:23.296 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:23.296 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.296 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.296 LIB libspdk_event_sock.a 00:03:23.296 LIB libspdk_event_vhost_blk.a 00:03:23.296 LIB libspdk_event_scheduler.a 00:03:23.296 LIB libspdk_event_keyring.a 00:03:23.296 LIB libspdk_event_vfu_tgt.a 00:03:23.296 LIB libspdk_event_vmd.a 00:03:23.296 LIB libspdk_event_iobuf.a 00:03:23.554 CC module/event/subsystems/accel/accel.o 00:03:23.813 LIB libspdk_event_accel.a 00:03:24.073 CC module/event/subsystems/bdev/bdev.o 00:03:24.073 LIB libspdk_event_bdev.a 00:03:24.641 CC module/event/subsystems/nbd/nbd.o 00:03:24.641 CC module/event/subsystems/ublk/ublk.o 00:03:24.641 CC module/event/subsystems/scsi/scsi.o 00:03:24.641 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.641 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:24.641 LIB libspdk_event_nbd.a 00:03:24.641 LIB libspdk_event_ublk.a 00:03:24.641 LIB libspdk_event_scsi.a 00:03:24.641 LIB libspdk_event_nvmf.a 00:03:24.900 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.900 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:25.158 LIB libspdk_event_iscsi.a 00:03:25.158 LIB libspdk_event_vhost_scsi.a 00:03:25.419 TEST_HEADER include/spdk/accel_module.h 00:03:25.419 TEST_HEADER include/spdk/accel.h 00:03:25.419 TEST_HEADER include/spdk/assert.h 00:03:25.419 TEST_HEADER include/spdk/barrier.h 00:03:25.419 TEST_HEADER include/spdk/bdev_module.h 00:03:25.419 TEST_HEADER include/spdk/bdev.h 00:03:25.419 TEST_HEADER include/spdk/bdev_zone.h 00:03:25.419 TEST_HEADER include/spdk/base64.h 00:03:25.419 TEST_HEADER include/spdk/bit_array.h 00:03:25.419 TEST_HEADER include/spdk/blob_bdev.h 00:03:25.419 TEST_HEADER include/spdk/bit_pool.h 00:03:25.419 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:25.419 TEST_HEADER include/spdk/blobfs.h 00:03:25.419 TEST_HEADER include/spdk/blob.h 00:03:25.419 TEST_HEADER include/spdk/conf.h 00:03:25.419 TEST_HEADER include/spdk/config.h 00:03:25.419 TEST_HEADER include/spdk/cpuset.h 00:03:25.419 TEST_HEADER include/spdk/crc16.h 00:03:25.419 CC app/spdk_nvme_perf/perf.o 00:03:25.419 TEST_HEADER include/spdk/crc64.h 00:03:25.419 TEST_HEADER include/spdk/crc32.h 00:03:25.419 CC test/rpc_client/rpc_client_test.o 00:03:25.419 TEST_HEADER include/spdk/dma.h 00:03:25.419 TEST_HEADER include/spdk/dif.h 00:03:25.419 TEST_HEADER include/spdk/endian.h 00:03:25.419 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.419 TEST_HEADER include/spdk/env.h 00:03:25.419 TEST_HEADER include/spdk/event.h 00:03:25.419 TEST_HEADER include/spdk/fd_group.h 00:03:25.419 TEST_HEADER include/spdk/file.h 00:03:25.419 TEST_HEADER include/spdk/fd.h 00:03:25.419 TEST_HEADER include/spdk/ftl.h 00:03:25.419 CC app/spdk_lspci/spdk_lspci.o 00:03:25.419 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.419 TEST_HEADER include/spdk/histogram_data.h 00:03:25.419 TEST_HEADER include/spdk/hexlify.h 00:03:25.419 TEST_HEADER include/spdk/idxd.h 00:03:25.419 CC app/spdk_nvme_discover/discovery_aer.o 00:03:25.419 CXX app/trace/trace.o 00:03:25.419 TEST_HEADER include/spdk/idxd_spec.h 00:03:25.419 CC app/trace_record/trace_record.o 00:03:25.419 TEST_HEADER include/spdk/init.h 00:03:25.419 TEST_HEADER include/spdk/ioat_spec.h 00:03:25.419 TEST_HEADER include/spdk/ioat.h 00:03:25.419 TEST_HEADER include/spdk/iscsi_spec.h 00:03:25.419 TEST_HEADER 
include/spdk/jsonrpc.h 00:03:25.419 TEST_HEADER include/spdk/json.h 00:03:25.419 CC app/spdk_nvme_identify/identify.o 00:03:25.419 TEST_HEADER include/spdk/keyring.h 00:03:25.419 CC app/spdk_top/spdk_top.o 00:03:25.419 TEST_HEADER include/spdk/keyring_module.h 00:03:25.419 TEST_HEADER include/spdk/likely.h 00:03:25.419 TEST_HEADER include/spdk/log.h 00:03:25.419 TEST_HEADER include/spdk/lvol.h 00:03:25.419 TEST_HEADER include/spdk/memory.h 00:03:25.419 TEST_HEADER include/spdk/mmio.h 00:03:25.419 TEST_HEADER include/spdk/nbd.h 00:03:25.419 TEST_HEADER include/spdk/notify.h 00:03:25.419 TEST_HEADER include/spdk/nvme.h 00:03:25.419 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.419 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.419 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.419 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.419 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.419 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.419 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.419 CC app/nvmf_tgt/nvmf_main.o 00:03:25.419 TEST_HEADER include/spdk/nvmf.h 00:03:25.419 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.419 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.419 TEST_HEADER include/spdk/opal.h 00:03:25.419 TEST_HEADER include/spdk/pci_ids.h 00:03:25.419 TEST_HEADER include/spdk/opal_spec.h 00:03:25.419 TEST_HEADER include/spdk/pipe.h 00:03:25.419 TEST_HEADER include/spdk/queue.h 00:03:25.419 TEST_HEADER include/spdk/reduce.h 00:03:25.419 TEST_HEADER include/spdk/rpc.h 00:03:25.419 TEST_HEADER include/spdk/scsi.h 00:03:25.419 TEST_HEADER include/spdk/scheduler.h 00:03:25.419 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.419 TEST_HEADER include/spdk/sock.h 00:03:25.419 CC app/iscsi_tgt/iscsi_tgt.o 00:03:25.419 TEST_HEADER include/spdk/stdinc.h 00:03:25.419 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.419 TEST_HEADER include/spdk/thread.h 00:03:25.419 TEST_HEADER include/spdk/string.h 00:03:25.419 TEST_HEADER include/spdk/trace.h 00:03:25.419 TEST_HEADER include/spdk/trace_parser.h 00:03:25.419 TEST_HEADER include/spdk/tree.h 00:03:25.419 TEST_HEADER include/spdk/ublk.h 00:03:25.419 TEST_HEADER include/spdk/util.h 00:03:25.419 TEST_HEADER include/spdk/uuid.h 00:03:25.419 TEST_HEADER include/spdk/version.h 00:03:25.419 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.419 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.419 CC app/vhost/vhost.o 00:03:25.419 TEST_HEADER include/spdk/vmd.h 00:03:25.419 TEST_HEADER include/spdk/vhost.h 00:03:25.419 TEST_HEADER include/spdk/zipf.h 00:03:25.419 TEST_HEADER include/spdk/xor.h 00:03:25.419 CXX test/cpp_headers/accel.o 00:03:25.419 CXX test/cpp_headers/accel_module.o 00:03:25.419 CXX test/cpp_headers/assert.o 00:03:25.419 CXX test/cpp_headers/barrier.o 00:03:25.419 CXX test/cpp_headers/bdev.o 00:03:25.419 CXX test/cpp_headers/base64.o 00:03:25.419 CXX test/cpp_headers/bdev_module.o 00:03:25.419 CXX test/cpp_headers/bdev_zone.o 00:03:25.419 CXX test/cpp_headers/bit_array.o 00:03:25.419 CXX test/cpp_headers/bit_pool.o 00:03:25.419 CXX test/cpp_headers/blob_bdev.o 00:03:25.419 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.419 CXX test/cpp_headers/blobfs.o 00:03:25.419 CXX test/cpp_headers/blob.o 00:03:25.419 CXX test/cpp_headers/conf.o 00:03:25.419 CXX test/cpp_headers/config.o 00:03:25.419 CXX test/cpp_headers/crc16.o 00:03:25.419 CXX test/cpp_headers/cpuset.o 00:03:25.419 CXX test/cpp_headers/crc32.o 00:03:25.419 CXX test/cpp_headers/crc64.o 00:03:25.419 CXX test/cpp_headers/dif.o 00:03:25.419 CXX test/cpp_headers/dma.o 00:03:25.419 CXX 
test/cpp_headers/endian.o 00:03:25.419 CXX test/cpp_headers/env_dpdk.o 00:03:25.419 CXX test/cpp_headers/env.o 00:03:25.419 CXX test/cpp_headers/event.o 00:03:25.419 CXX test/cpp_headers/fd_group.o 00:03:25.419 CXX test/cpp_headers/fd.o 00:03:25.419 CXX test/cpp_headers/file.o 00:03:25.419 CXX test/cpp_headers/ftl.o 00:03:25.419 CC app/spdk_dd/spdk_dd.o 00:03:25.419 CXX test/cpp_headers/gpt_spec.o 00:03:25.419 CC app/spdk_tgt/spdk_tgt.o 00:03:25.419 CXX test/cpp_headers/hexlify.o 00:03:25.419 CC test/app/stub/stub.o 00:03:25.419 CXX test/cpp_headers/histogram_data.o 00:03:25.419 CXX test/cpp_headers/idxd.o 00:03:25.419 CXX test/cpp_headers/idxd_spec.o 00:03:25.419 CC test/event/reactor/reactor.o 00:03:25.419 CXX test/cpp_headers/init.o 00:03:25.419 CC test/app/histogram_perf/histogram_perf.o 00:03:25.419 CC test/event/event_perf/event_perf.o 00:03:25.419 CC test/nvme/overhead/overhead.o 00:03:25.419 CC test/nvme/aer/aer.o 00:03:25.419 CC test/thread/lock/spdk_lock.o 00:03:25.419 CC test/app/jsoncat/jsoncat.o 00:03:25.419 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.419 CC test/nvme/e2edp/nvme_dp.o 00:03:25.419 CC test/nvme/sgl/sgl.o 00:03:25.419 CC test/thread/poller_perf/poller_perf.o 00:03:25.419 CC test/nvme/simple_copy/simple_copy.o 00:03:25.419 CC test/nvme/err_injection/err_injection.o 00:03:25.419 CC test/nvme/boot_partition/boot_partition.o 00:03:25.419 CC test/event/reactor_perf/reactor_perf.o 00:03:25.419 CC test/nvme/connect_stress/connect_stress.o 00:03:25.419 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.419 CC examples/ioat/perf/perf.o 00:03:25.419 CC test/nvme/startup/startup.o 00:03:25.419 CC test/nvme/reset/reset.o 00:03:25.419 CC examples/sock/hello_world/hello_sock.o 00:03:25.419 CC test/event/app_repeat/app_repeat.o 00:03:25.419 CC examples/ioat/verify/verify.o 00:03:25.419 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.419 CC test/nvme/reserve/reserve.o 00:03:25.419 CC test/nvme/cuse/cuse.o 00:03:25.419 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.419 CC test/env/vtophys/vtophys.o 00:03:25.419 CC test/nvme/compliance/nvme_compliance.o 00:03:25.419 CC examples/accel/perf/accel_perf.o 00:03:25.419 CC examples/util/zipf/zipf.o 00:03:25.419 CC test/nvme/fdp/fdp.o 00:03:25.419 CC examples/nvme/hello_world/hello_world.o 00:03:25.419 CC examples/nvme/arbitration/arbitration.o 00:03:25.419 CC examples/vmd/led/led.o 00:03:25.419 CXX test/cpp_headers/ioat.o 00:03:25.419 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.419 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.419 CC examples/nvme/reconnect/reconnect.o 00:03:25.419 CC examples/idxd/perf/perf.o 00:03:25.419 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.419 CC test/env/pci/pci_ut.o 00:03:25.685 CC app/fio/nvme/fio_plugin.o 00:03:25.685 CC examples/nvme/abort/abort.o 00:03:25.685 CC examples/nvme/hotplug/hotplug.o 00:03:25.685 CC test/app/bdev_svc/bdev_svc.o 00:03:25.685 CC test/env/memory/memory_ut.o 00:03:25.685 CC test/event/scheduler/scheduler.o 00:03:25.685 CC test/dma/test_dma/test_dma.o 00:03:25.685 CC test/accel/dif/dif.o 00:03:25.685 LINK spdk_lspci 00:03:25.685 CC test/bdev/bdevio/bdevio.o 00:03:25.685 CC examples/blob/cli/blobcli.o 00:03:25.685 CC examples/blob/hello_world/hello_blob.o 00:03:25.685 CC test/blobfs/mkfs/mkfs.o 00:03:25.685 CC examples/thread/thread/thread_ex.o 00:03:25.685 CC examples/nvmf/nvmf/nvmf.o 00:03:25.685 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.685 LINK rpc_client_test 00:03:25.685 CC app/fio/bdev/fio_plugin.o 00:03:25.685 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.685 CC test/lvol/esnap/esnap.o 00:03:25.685 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.685 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.685 LINK spdk_nvme_discover 00:03:25.685 LINK reactor 00:03:25.685 LINK nvmf_tgt 00:03:25.685 CXX test/cpp_headers/ioat_spec.o 00:03:25.685 CXX test/cpp_headers/iscsi_spec.o 00:03:25.685 LINK histogram_perf 00:03:25.685 CXX test/cpp_headers/json.o 00:03:25.685 LINK interrupt_tgt 00:03:25.685 CXX test/cpp_headers/jsonrpc.o 00:03:25.685 LINK event_perf 00:03:25.685 CXX test/cpp_headers/keyring.o 00:03:25.685 LINK poller_perf 00:03:25.685 LINK jsoncat 00:03:25.685 CXX test/cpp_headers/keyring_module.o 00:03:25.685 CXX test/cpp_headers/likely.o 00:03:25.685 CXX test/cpp_headers/log.o 00:03:25.685 CXX test/cpp_headers/lvol.o 00:03:25.685 LINK reactor_perf 00:03:25.685 CXX test/cpp_headers/memory.o 00:03:25.685 CXX test/cpp_headers/mmio.o 00:03:25.685 CXX test/cpp_headers/nbd.o 00:03:25.685 LINK lsvmd 00:03:25.685 CXX test/cpp_headers/notify.o 00:03:25.685 CXX test/cpp_headers/nvme.o 00:03:25.685 CXX test/cpp_headers/nvme_intel.o 00:03:25.685 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.685 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.685 CXX test/cpp_headers/nvme_spec.o 00:03:25.685 LINK spdk_trace_record 00:03:25.685 CXX test/cpp_headers/nvme_zns.o 00:03:25.685 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.685 LINK vhost 00:03:25.685 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.685 CXX test/cpp_headers/nvmf.o 00:03:25.685 CXX test/cpp_headers/nvmf_spec.o 00:03:25.685 CXX test/cpp_headers/nvmf_transport.o 00:03:25.685 LINK zipf 00:03:25.685 CXX test/cpp_headers/opal.o 00:03:25.685 LINK vtophys 00:03:25.685 LINK app_repeat 00:03:25.685 CXX test/cpp_headers/opal_spec.o 00:03:25.685 LINK led 00:03:25.685 LINK iscsi_tgt 00:03:25.685 CXX test/cpp_headers/pci_ids.o 00:03:25.685 CXX test/cpp_headers/pipe.o 00:03:25.685 LINK stub 00:03:25.685 LINK connect_stress 00:03:25.685 LINK startup 00:03:25.685 CXX test/cpp_headers/queue.o 00:03:25.685 LINK boot_partition 00:03:25.685 CXX test/cpp_headers/reduce.o 00:03:25.685 CXX test/cpp_headers/rpc.o 00:03:25.685 LINK env_dpdk_post_init 00:03:25.685 CXX test/cpp_headers/scheduler.o 00:03:25.685 LINK spdk_tgt 00:03:25.685 CXX test/cpp_headers/scsi.o 00:03:25.685 LINK doorbell_aers 00:03:25.685 LINK bdev_svc 00:03:25.685 LINK reserve 00:03:25.948 LINK err_injection 00:03:25.948 LINK verify 00:03:25.948 LINK fused_ordering 00:03:25.948 LINK ioat_perf 00:03:25.948 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:25.948 struct spdk_nvme_fdp_ruhs ruhs; 00:03:25.948 ^ 00:03:25.948 LINK pmr_persistence 00:03:25.948 LINK cmb_copy 00:03:25.948 LINK hotplug 00:03:25.948 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.948 LINK aer 00:03:25.948 CXX test/cpp_headers/scsi_spec.o 00:03:25.948 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.948 LINK hello_world 00:03:25.948 LINK overhead 00:03:25.948 LINK simple_copy 00:03:25.948 LINK hello_sock 00:03:25.948 LINK scheduler 00:03:25.948 LINK mkfs 00:03:25.948 LINK hello_blob 00:03:25.948 LINK thread 00:03:25.948 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:25.948 LINK nvme_dp 00:03:25.948 LINK sgl 00:03:25.948 LINK reset 00:03:25.948 LINK mem_callbacks 00:03:25.948 CXX test/cpp_headers/sock.o 00:03:25.948 CXX test/cpp_headers/string.o 00:03:25.948 CXX test/cpp_headers/stdinc.o 
00:03:25.948 LINK hello_bdev 00:03:25.948 LINK fdp 00:03:25.948 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:25.948 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:25.948 CXX test/cpp_headers/thread.o 00:03:25.948 CXX test/cpp_headers/trace_parser.o 00:03:25.948 CXX test/cpp_headers/trace.o 00:03:25.948 CXX test/cpp_headers/tree.o 00:03:25.948 CXX test/cpp_headers/ublk.o 00:03:25.948 CXX test/cpp_headers/util.o 00:03:25.948 CXX test/cpp_headers/uuid.o 00:03:25.948 CXX test/cpp_headers/version.o 00:03:25.948 LINK arbitration 00:03:25.948 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.948 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.948 CXX test/cpp_headers/vhost.o 00:03:25.948 CXX test/cpp_headers/vmd.o 00:03:25.948 CXX test/cpp_headers/xor.o 00:03:25.948 CXX test/cpp_headers/zipf.o 00:03:25.948 LINK test_dma 00:03:25.948 LINK nvmf 00:03:25.948 LINK spdk_trace 00:03:26.209 LINK idxd_perf 00:03:26.209 LINK bdevio 00:03:26.209 LINK nvme_manage 00:03:26.209 LINK pci_ut 00:03:26.209 LINK reconnect 00:03:26.209 LINK abort 00:03:26.209 LINK accel_perf 00:03:26.209 LINK blobcli 00:03:26.209 LINK nvme_fuzz 00:03:26.209 LINK nvme_compliance 00:03:26.209 LINK spdk_dd 00:03:26.209 LINK dif 00:03:26.209 LINK spdk_nvme_identify 00:03:26.209 1 warning generated. 00:03:26.481 LINK llvm_vfio_fuzz 00:03:26.481 LINK spdk_bdev 00:03:26.481 LINK spdk_nvme 00:03:26.481 LINK bdevperf 00:03:26.481 LINK memory_ut 00:03:26.743 LINK vhost_fuzz 00:03:26.743 LINK spdk_top 00:03:26.743 LINK llvm_nvme_fuzz 00:03:26.743 LINK spdk_nvme_perf 00:03:26.743 LINK cuse 00:03:27.309 LINK spdk_lock 00:03:27.309 LINK iscsi_fuzz 00:03:29.841 LINK esnap 00:03:29.841 00:03:29.841 real 0m24.311s 00:03:29.841 user 4m46.316s 00:03:29.841 sys 1m58.294s 00:03:29.841 21:04:26 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:29.841 21:04:26 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.841 ************************************ 00:03:29.841 END TEST make 00:03:29.841 ************************************ 00:03:30.101 21:04:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.101 21:04:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:30.101 21:04:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.102 21:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.102 21:04:26 -- pm/common@44 -- $ pid=3876914 00:03:30.102 21:04:26 -- pm/common@50 -- $ kill -TERM 3876914 00:03:30.102 21:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.102 21:04:26 -- pm/common@44 -- $ pid=3876916 00:03:30.102 21:04:26 -- pm/common@50 -- $ kill -TERM 3876916 00:03:30.102 21:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:30.102 21:04:26 -- pm/common@44 -- $ pid=3876919 00:03:30.102 21:04:26 -- pm/common@50 -- $ kill -TERM 3876919 00:03:30.102 21:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:30.102 21:04:26 -- pm/common@44 -- $ pid=3876936 
00:03:30.102 21:04:26 -- pm/common@50 -- $ sudo -E kill -TERM 3876936 00:03:30.102 21:04:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:30.102 21:04:26 -- nvmf/common.sh@7 -- # uname -s 00:03:30.102 21:04:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:30.102 21:04:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:30.102 21:04:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:30.102 21:04:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:30.102 21:04:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:30.102 21:04:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:30.102 21:04:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:30.102 21:04:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:30.102 21:04:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:30.102 21:04:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:30.102 21:04:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:30.102 21:04:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:30.102 21:04:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:30.102 21:04:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:30.102 21:04:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:30.102 21:04:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:30.102 21:04:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:30.102 21:04:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.102 21:04:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.102 21:04:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.102 21:04:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.102 21:04:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.102 21:04:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.102 21:04:26 -- paths/export.sh@5 -- # export PATH 00:03:30.102 21:04:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.102 21:04:26 -- nvmf/common.sh@47 -- # : 0 00:03:30.102 21:04:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:30.102 21:04:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:30.102 21:04:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.102 21:04:26 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.102 21:04:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.102 21:04:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:30.102 21:04:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:30.102 21:04:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:30.102 21:04:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:30.102 21:04:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:30.102 21:04:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:30.102 21:04:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:30.102 21:04:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:30.102 21:04:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:30.102 21:04:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:30.102 21:04:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:30.102 21:04:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.102 21:04:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.102 21:04:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.102 21:04:26 -- spdk/autotest.sh@48 -- # udevadm_pid=3951761 00:03:30.102 21:04:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.102 21:04:26 -- pm/common@17 -- # local monitor 00:03:30.102 21:04:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@21 -- # date +%s 00:03:30.102 21:04:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.102 21:04:26 -- pm/common@21 -- # date +%s 00:03:30.102 21:04:26 -- pm/common@25 -- # sleep 1 00:03:30.102 21:04:26 -- pm/common@21 -- # date +%s 00:03:30.102 21:04:26 -- pm/common@21 -- # date +%s 00:03:30.102 21:04:26 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720983866 00:03:30.102 21:04:26 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720983866 00:03:30.102 21:04:26 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720983866 00:03:30.102 21:04:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720983866 00:03:30.102 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720983866_collect-vmstat.pm.log 00:03:30.102 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720983866_collect-cpu-temp.pm.log 00:03:30.102 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720983866_collect-cpu-load.pm.log 00:03:30.361 Redirecting to 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720983866_collect-bmc-pm.bmc.pm.log 00:03:31.300 21:04:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.300 21:04:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.300 21:04:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:31.300 21:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:31.300 21:04:27 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.300 21:04:27 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:31.300 21:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:31.300 21:04:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:31.300 21:04:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:31.300 21:04:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:31.300 21:04:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:31.300 21:04:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:31.300 21:04:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.300 21:04:28 -- common/autotest_common.sh@1451 -- # uname 00:03:31.300 21:04:28 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:31.300 21:04:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.300 21:04:28 -- common/autotest_common.sh@1471 -- # uname 00:03:31.300 21:04:28 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:31.300 21:04:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:31.300 21:04:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:31.300 21:04:28 -- spdk/autotest.sh@72 -- # hash lcov 00:03:31.300 21:04:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:31.300 21:04:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:31.300 21:04:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:31.300 21:04:28 -- common/autotest_common.sh@10 -- # set +x 00:03:31.300 21:04:28 -- spdk/autotest.sh@91 -- # rm -f 00:03:31.300 21:04:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.593 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:34.593 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:34.853 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:34.853 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:34.853 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:34.853 0000:d8:00.0 (8086 0a54): 
Already using the nvme driver 00:03:34.853 21:04:31 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:34.853 21:04:31 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:34.853 21:04:31 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:34.853 21:04:31 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:34.853 21:04:31 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:34.853 21:04:31 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:34.853 21:04:31 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:34.853 21:04:31 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.853 21:04:31 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:34.853 21:04:31 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:34.853 21:04:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:34.853 21:04:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:34.853 21:04:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:34.853 21:04:31 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:34.853 21:04:31 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:34.853 No valid GPT data, bailing 00:03:34.853 21:04:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.853 21:04:31 -- scripts/common.sh@391 -- # pt= 00:03:34.853 21:04:31 -- scripts/common.sh@392 -- # return 1 00:03:34.853 21:04:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:34.853 1+0 records in 00:03:34.853 1+0 records out 00:03:34.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234336 s, 447 MB/s 00:03:34.853 21:04:31 -- spdk/autotest.sh@118 -- # sync 00:03:34.853 21:04:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:34.853 21:04:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:34.853 21:04:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:41.426 21:04:37 -- spdk/autotest.sh@124 -- # uname -s 00:03:41.426 21:04:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:41.426 21:04:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:41.426 21:04:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.426 21:04:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.426 21:04:37 -- common/autotest_common.sh@10 -- # set +x 00:03:41.426 ************************************ 00:03:41.426 START TEST setup.sh 00:03:41.426 ************************************ 00:03:41.426 21:04:37 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:41.426 * Looking for test storage... 
00:03:41.426 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:41.426 21:04:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:41.426 21:04:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:41.426 21:04:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:41.426 21:04:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.426 21:04:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.426 21:04:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.426 ************************************ 00:03:41.426 START TEST acl 00:03:41.426 ************************************ 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:41.426 * Looking for test storage... 00:03:41.426 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:41.426 21:04:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:41.426 21:04:38 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.427 21:04:38 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:41.427 21:04:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:41.427 21:04:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:41.427 21:04:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:41.427 21:04:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:41.427 21:04:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:41.427 21:04:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.427 21:04:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.620 21:04:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.620 21:04:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.620 21:04:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.620 21:04:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.620 21:04:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.620 21:04:41 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:48.157 Hugepages 00:03:48.157 node hugesize free / total 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 00:03:48.157 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl 
-- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.157 21:04:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.157 21:04:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:48.157 21:04:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.157 21:04:45 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.157 21:04:45 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.158 21:04:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.158 ************************************ 00:03:48.158 START TEST denied 00:03:48.158 ************************************ 00:03:48.158 21:04:45 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:48.158 21:04:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:48.158 21:04:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup 
output config 00:03:48.158 21:04:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:48.158 21:04:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.158 21:04:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:51.448 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.448 21:04:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.640 00:03:55.640 real 0m7.355s 00:03:55.640 user 0m2.172s 00:03:55.640 sys 0m4.414s 00:03:55.640 21:04:52 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.640 21:04:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:55.640 ************************************ 00:03:55.640 END TEST denied 00:03:55.640 ************************************ 00:03:55.640 21:04:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:55.640 21:04:52 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.640 21:04:52 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.640 21:04:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.640 ************************************ 00:03:55.640 START TEST allowed 00:03:55.640 ************************************ 00:03:55.640 21:04:52 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:55.640 21:04:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:55.640 21:04:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:55.640 21:04:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:55.640 21:04:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.640 21:04:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:00.978 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.978 21:04:57 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:00.978 21:04:57 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:00.978 21:04:57 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:00.978 21:04:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.978 21:04:57 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.165 00:04:05.165 real 0m8.769s 00:04:05.165 user 0m2.524s 00:04:05.165 sys 0m4.798s 00:04:05.165 
21:05:01 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.165 21:05:01 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:05.165 ************************************ 00:04:05.165 END TEST allowed 00:04:05.165 ************************************ 00:04:05.165 00:04:05.165 real 0m23.146s 00:04:05.165 user 0m7.264s 00:04:05.165 sys 0m13.886s 00:04:05.165 21:05:01 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.165 21:05:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:05.165 ************************************ 00:04:05.165 END TEST acl 00:04:05.165 ************************************ 00:04:05.165 21:05:01 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:05.165 21:05:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.165 21:05:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.165 21:05:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.165 ************************************ 00:04:05.165 START TEST hugepages 00:04:05.165 ************************************ 00:04:05.166 21:05:01 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:05.166 * Looking for test storage... 00:04:05.166 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 40454892 kB' 'MemAvailable: 42840252 kB' 'Buffers: 12536 kB' 'Cached: 11394316 kB' 'SwapCached: 16 kB' 'Active: 9638960 kB' 'Inactive: 2354388 kB' 'Active(anon): 9163608 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 
0 kB' 'Writeback: 0 kB' 'AnonPages: 589976 kB' 'Mapped: 211604 kB' 'Shmem: 8634200 kB' 'KReclaimable: 252448 kB' 'Slab: 794492 kB' 'SReclaimable: 252448 kB' 'SUnreclaim: 542044 kB' 'KernelStack: 21968 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439068 kB' 'Committed_AS: 10589356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213300 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.166 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.167 21:05:01 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.167 21:05:01 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:05.167 21:05:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.167 21:05:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.167 21:05:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.167 ************************************ 00:04:05.167 START TEST default_setup 00:04:05.167 ************************************ 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:05.167 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 
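Up to this point hugepages.sh has read Hugepagesize (2048 kB) out of /proc/meminfo, counted two NUMA nodes, and is zeroing any leftover per-node reservations before default_setup turns the 2097152 kB request into 2097152 / 2048 = 1024 pages (the nr_hugepages=1024 seen above). A minimal stand-alone sketch of those steps, assuming the usual sysfs node layout; the traced 'echo 0' does not show its redirect target, so writing straight into nr_hugepages is an assumption:

    # Hugepage size reported by the kernel, in kB (2048 in the trace above).
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    # clear_hp equivalent: zero every per-node, per-size reservation.
    # (Assumption: the traced 'echo 0' redirects into nr_hugepages.)
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done

    # get_test_nr_hugepages arithmetic: a 2097152 kB request becomes
    # 2097152 / 2048 = 1024 pages, matching nr_hugepages=1024 above.
    size_kb=2097152
    echo $(( size_kb / hugepagesize_kb ))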
00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.168 21:05:01 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:08.447 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:08.447 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:08.448 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.823 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42630256 kB' 'MemAvailable: 45015616 kB' 'Buffers: 12536 kB' 'Cached: 11394444 kB' 'SwapCached: 16 kB' 'Active: 9653752 kB' 'Inactive: 2354388 kB' 'Active(anon): 9178400 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604036 kB' 'Mapped: 210916 kB' 'Shmem: 8634328 kB' 'KReclaimable: 252448 kB' 'Slab: 792488 kB' 'SReclaimable: 252448 kB' 'SUnreclaim: 540040 kB' 'KernelStack: 22112 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10604228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
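Every one of the repeated IFS=': ' / read -r var val _ / continue entries in this trace is the same field scan: get_meminfo walks the snapshot of /proc/meminfo and echoes only the value whose key matches the requested field (AnonHugePages in this call). A simplified stand-in for that lookup; the traced function additionally snapshots the file with mapfile and handles per-node prefixes, and the name below is illustrative:

    # Simplified field lookup over /proc/meminfo, mirroring the traced scan.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"    # value only, e.g. "0" for AnonHugePages here
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field AnonHugePages   # -> 0 on this run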
00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.823 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42633948 kB' 'MemAvailable: 45019308 kB' 'Buffers: 12536 kB' 'Cached: 11394448 kB' 'SwapCached: 16 kB' 'Active: 9653288 kB' 'Inactive: 2354388 kB' 'Active(anon): 9177936 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604112 kB' 'Mapped: 210792 kB' 'Shmem: 8634332 kB' 'KReclaimable: 252448 kB' 'Slab: 792400 kB' 'SReclaimable: 252448 kB' 'SUnreclaim: 539952 kB' 'KernelStack: 22128 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10604244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
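With anon=0 established, verify_nr_hugepages moves on to HugePages_Surp; together with the surp, resv, and anon locals declared at the top of the function, these are the counters the verification works from (the pass/fail comparison itself falls outside this excerpt). A sketch of gathering them with the lookup from the previous note; mapping resv to HugePages_Rsvd is an assumption:

    anon=$(get_meminfo_field AnonHugePages)   # anon memory backed by THP, 0 in this run
    surp=$(get_meminfo_field HugePages_Surp)  # pages allocated beyond the configured pool
    resv=$(get_meminfo_field HugePages_Rsvd)  # assumed source for 'resv': reserved, not yet faulted
    echo "anon=$anon surp=$surp resv=$resv"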
00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.824 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.824 21:05:06 
[... repetitive xtrace elided: the IFS=': ' read loop at setup/common.sh@31-@32 walks the remaining /proc/meminfo fields one by one, each failing the HugePages_Surp match and hitting "continue" ...]
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
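The lookup traced above follows a simple pattern: pick the meminfo file (global or per-NUMA-node), strip any "Node <n> " prefix, split each line on ": ", and echo the value once the requested field is reached. Below is a minimal bash sketch of that pattern; the helper name is illustrative and this is not the verbatim setup/common.sh source.

  get_meminfo_sketch() {
      # Minimal sketch of the lookup pattern seen in the trace (hypothetical helper).
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node * }                    # per-node files prefix every field with "Node <n> "
          IFS=': ' read -r var val _ <<< "$line"  # split "Field:   value kB" into name and value
          if [[ $var == "$get" ]]; then
              echo "$val"                         # e.g. 0 for HugePages_Surp on this host
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # usage: get_meminfo_sketch HugePages_Surp      -> 0
  #        get_meminfo_sketch HugePages_Total 0   -> node0's counter (1024 in this run)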
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42636340 kB' 'MemAvailable: 45021700 kB' 'Buffers: 12536 kB' 'Cached: 11394468 kB' 'SwapCached: 16 kB' 'Active: 9652416 kB' 'Inactive: 2354388 kB' 'Active(anon): 9177064 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603068 kB' 'Mapped: 210792 kB' 'Shmem: 8634352 kB' 'KReclaimable: 252448 kB' 'Slab: 792400 kB' 'SReclaimable: 252448 kB' 'SUnreclaim: 539952 kB' 'KernelStack: 22128 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10604268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213524 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
[... repetitive xtrace elided: every /proc/meminfo field fails the HugePages_Rsvd match and hits "continue" until the HugePages_Rsvd line is read ...]
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:09.825 nr_hugepages=1024
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.825 resv_hugepages=0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.825 surplus_hugepages=0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.825 anon_hugepages=0
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
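For a quick manual check, the counters the test just derived (surp=0, resv=0, nr_hugepages=1024) can be read straight from /proc/meminfo; the values shown in the comments below are the ones visible in the dump captured above.

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
  # HugePages_Total:    1024
  # HugePages_Free:     1024
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0
  # Hugepagesize:       2048 kB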
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:09.825 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:10.086 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42648280 kB' 'MemAvailable: 45033640 kB' 'Buffers: 12536 kB' 'Cached: 11394488 kB' 'SwapCached: 16 kB' 'Active: 9652876 kB' 'Inactive: 2354388 kB' 'Active(anon): 9177524 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603508 kB' 'Mapped: 210792 kB' 'Shmem: 8634372 kB' 'KReclaimable: 252448 kB' 'Slab: 792376 kB' 'SReclaimable: 252448 kB' 'SUnreclaim: 539928 kB' 'KernelStack: 22128 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10604288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
[... repetitive xtrace elided: every /proc/meminfo field fails the HugePages_Total match and hits "continue" until the HugePages_Total line is read ...]
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
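The assertion traced just above, (( 1024 == nr_hugepages + surp + resv )), is the accounting identity the test relies on: the kernel-reported HugePages_Total must equal the configured page count plus the surplus and reserved pages. A hedged, standalone sketch of the same check follows; the variable names are illustrative and the values in the comments come from the meminfo dumps above (1024 == 1024 + 0 + 0).

  read -r total free resv surp < <(awk '
      /^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
      /^HugePages_Rsvd:/  {r=$2} /^HugePages_Surp:/ {s=$2}
      END {print t, f, r, s}' /proc/meminfo)
  nr_hugepages=1024        # the value default_setup echoed earlier in this trace
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting OK: total=$total free=$free"
  else
      echo "unexpected hugepage accounting: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
  fi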
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25658684 kB' 'MemUsed: 6933400 kB' 'SwapCached: 16 kB' 'Active: 3118676 kB' 'Inactive: 180704 kB' 'Active(anon): 2902056 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049052 kB' 'Mapped: 138156 kB' 'AnonPages: 253920 kB' 'Shmem: 2651728 kB' 'KernelStack: 12216 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131416 kB' 'Slab: 379564 kB' 'SReclaimable: 131416 kB' 'SUnreclaim: 248148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:10.087 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (node0 fields from MemTotal through HugePages_Free are each read and skipped with continue)
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:10.088 node0=1024 expecting 1024
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:10.088 
00:04:10.088 real 0m5.217s
00:04:10.088 user 0m1.427s
00:04:10.088 sys 0m2.368s
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:10.088 21:05:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:10.088 ************************
00:04:10.088 END TEST default_setup
00:04:10.088 ************************
00:04:10.088 21:05:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:10.088 21:05:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:10.088 21:05:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:10.088 21:05:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:10.088 ************************
00:04:10.088 START TEST per_node_1G_alloc
00:04:10.088 ************************
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:10.088 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.089 21:05:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:13.366 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.366 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-25 -- # local get=AnonHugePages node=; mem_f=/proc/meminfo (no per-node meminfo requested)
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:13.631 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42675232 kB' 'MemAvailable: 45060576 kB' 'Buffers: 12536 kB' 'Cached: 11394604 kB' 'SwapCached: 16 kB' 'Active: 9652240 kB' 'Inactive: 2354388 kB' 'Active(anon): 9176888 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602480 kB' 'Mapped: 209728 kB' 'Shmem: 8634488 kB' 'KReclaimable: 252416 kB' 'Slab: 792380 kB' 'SReclaimable: 252416 kB' 'SUnreclaim: 539964 kB' 'KernelStack: 21920 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10591412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
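The per_node_1G_alloc trace above boils down to one invocation: ask setup.sh for 512 default-size hugepages on each of NUMA nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1), then re-verify. A hedged sketch of that call and a quick per-node readback follows; the sysfs paths are the standard kernel layout, and the 2048kB directory is an assumption about the default hugepage size on this runner.

  # Re-run the allocation the way hugepages.sh@146 does (root required).
  NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
  # Read back what each node actually received.
  for n in 0 1; do
      path=/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages
      printf 'node%s: %s pages\n' "$n" "$(cat "$path")"
  done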
00:04:13.632 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (/proc/meminfo fields from MemTotal through HardwareCorrupted are each read and skipped with continue)
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-25 -- # local get=HugePages_Surp node=; mem_f=/proc/meminfo (no per-node meminfo requested)
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42673896 kB' 'MemAvailable: 45059240 kB' 'Buffers: 12536 kB' 'Cached: 11394608 kB' 'SwapCached: 16 kB' 'Active: 9651312 kB' 'Inactive: 2354388 kB' 'Active(anon): 9175960 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602060 kB' 'Mapped: 209596 kB' 'Shmem: 8634492 kB' 'KReclaimable: 252416 kB' 'Slab: 792380 kB' 'SReclaimable: 252416 kB' 'SUnreclaim: 539964 kB' 'KernelStack: 21920 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10591428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
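The meminfo snapshot just printed is all verify_nr_hugepages needs: the anon-THP read already returned 0, and the remaining scans only pull the HugePages_* counters near the end of it. Two quick, generic ways to get at the same numbers outside the test harness (plain /proc and sysfs reads, nothing SPDK-specific) are sketched below.

  # The counters the verification reasons about, straight from /proc/meminfo.
  grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
  # The anon-THP guard mirrors hugepages.sh@96: only count AnonHugePages when THP is not [never].
  if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) == *'[never]'* ]]; then
      anon=0
  else
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages accounted as: $anon kB"   # 0 kB on this runner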
00:04:13.633 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (/proc/meminfo fields from MemTotal through HugePages_Free are each read and skipped with continue)
00:04:13.634 21:05:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.634 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.634 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.634 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.634 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.634 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42674400 kB' 'MemAvailable: 45059744 kB' 'Buffers: 12536 kB' 'Cached: 11394608 kB' 'SwapCached: 16 kB' 'Active: 9651184 kB' 'Inactive: 2354388 kB' 'Active(anon): 9175832 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 601912 kB' 'Mapped: 209596 kB' 'Shmem: 8634492 kB' 'KReclaimable: 252416 kB' 'Slab: 792380 kB' 'SReclaimable: 252416 kB' 'SUnreclaim: 539964 kB' 'KernelStack: 21936 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10591452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:13.635 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- 
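Every get_meminfo call traced above follows the same pattern: stream a meminfo file through an IFS=': ' read loop, skip each key that is not the one requested, and echo the matching value. A minimal standalone sketch of that lookup is below; the helper name get_meminfo_value and the direct read from /proc/meminfo are illustrative assumptions, not the exact setup/common.sh implementation (which also selects per-node files and strips their Node prefixes).

#!/usr/bin/env bash
# Sketch (assumed helper name) of the key lookup the trace above performs.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo key
        echo "$val"                        # e.g. "0" for HugePages_Surp
        return 0
    done < /proc/meminfo
    return 1                               # requested key not present
}

surp=$(get_meminfo_value HugePages_Surp)
echo "surplus hugepages: ${surp:-unknown}"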
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # ... (the read loop continues past every key from MemTotal through HugePages_Free until it reads the requested HugePages_Rsvd key) ...
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # local get=HugePages_Total; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42674012 kB' 'MemAvailable: 45059356 kB' 'Buffers: 12536 kB' 'Cached: 11394652 kB' 'SwapCached: 16 kB' 'Active: 9651092 kB' 'Inactive: 2354388 kB' 'Active(anon): 9175740 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 601748 kB' 'Mapped: 209596 kB' 'Shmem: 8634536 kB' 'KReclaimable: 252416 kB' 'Slab: 792380 kB' 'SReclaimable: 252416 kB' 'SUnreclaim: 539964 kB' 'KernelStack: 21920 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10591476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213492 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
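The mem=("${mem[@]#Node +([0-9]) }") step in the traces above is what lets one parser handle both /proc/meminfo and the per-node files: every line of /sys/devices/system/node/nodeN/meminfo carries a "Node N " prefix, and the extglob pattern removes it from each array element before the key/value split. A small sketch of just that step, assuming a NUMA Linux host where node0 exists; the head -n 3 is only for illustration.

#!/usr/bin/env bash
# Strip the "Node <digits> " prefix from per-node meminfo lines so they parse
# like plain /proc/meminfo; the +([0-9]) pattern needs extglob enabled.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | head -n 3   # first keys are MemTotal, MemFree, MemUsed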
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # ... (the read loop continues past every key from MemTotal through Unaccepted until it reads the requested HugePages_Total key) ...
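Once HugePages_Total is read back (echoed as 1024 just below), the hugepages.sh checks assert that the kernel's total equals the requested count plus surplus plus reserved pages, and get_nodes then splits the expectation across the two NUMA nodes (512 each). A hedged sketch of that consistency check follows; get_meminfo_value (redeclared so the snippet runs on its own) and nr_requested are assumptions, not the literal setup/hugepages.sh code.

#!/usr/bin/env bash
# Check HugePages_Total == requested + surplus + reserved, mirroring the
# comparisons seen in the trace; helper name and nr_requested are assumed.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

nr_requested=1024                          # what the test asked the kernel for
total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)

if (( total == nr_requested + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi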
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.638 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.638 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.638 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.638 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26726484 kB' 'MemUsed: 5865600 kB' 'SwapCached: 16 kB' 'Active: 3118728 kB' 'Inactive: 180704 kB' 'Active(anon): 2902108 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049124 kB' 'Mapped: 137868 kB' 'AnonPages: 253504 kB' 'Shmem: 2651800 kB' 'KernelStack: 11896 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131384 kB' 'Slab: 379420 kB' 'SReclaimable: 131384 kB' 'SUnreclaim: 248036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.639 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15947528 
kB' 'MemUsed: 11755620 kB' 'SwapCached: 0 kB' 'Active: 6532248 kB' 'Inactive: 2173684 kB' 'Active(anon): 6273516 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358100 kB' 'Mapped: 71728 kB' 'AnonPages: 348052 kB' 'Shmem: 5982756 kB' 'KernelStack: 10024 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121032 kB' 'Slab: 412960 kB' 'SReclaimable: 121032 kB' 'SUnreclaim: 291928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.640 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 
21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.641 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.642 node0=512 expecting 512 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:13.642 node1=512 expecting 512 00:04:13.642 21:05:10 
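Note: by this point the trace has read HugePages_Surp for node 0 and node 1 (both 0) and echoed the expected per-node split. The same bookkeeping, condensed into a self-contained sketch; awk is used here purely for brevity, whereas the traced script does the parsing in pure bash, and the error message is illustrative:

nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total: $total"

# Expected even split from the trace; each node's figure is then bumped by
# that node's own surplus pages (0 in this run, so 512 stays 512).
nodes_test=(512 512)
for node in 0 1; do
    node_surp=$(awk '/HugePages_Surp/ {print $NF}' \
        /sys/devices/system/node/node$node/meminfo)
    (( nodes_test[node] += node_surp ))
    echo "node$node=${nodes_test[node]} expecting 512"
done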
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.642 00:04:13.642 real 0m3.626s 00:04:13.642 user 0m1.361s 00:04:13.642 sys 0m2.334s 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.642 21:05:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.642 ************************************ 00:04:13.642 END TEST per_node_1G_alloc 00:04:13.642 ************************************ 00:04:13.901 21:05:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:13.901 21:05:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:13.901 21:05:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.901 21:05:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.901 ************************************ 00:04:13.901 START TEST even_2G_alloc 00:04:13.901 ************************************ 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.901 
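Note: the per_node_1G_alloc test passes (512 pages on each node, about 3.6 s wall time) and even_2G_alloc begins. Its request of 2097152 kB with the default 2048 kB hugepage size works out to 1024 pages, split evenly across the two NUMA nodes; written out as a small standalone sketch with illustrative variable names:

size_kb=2097152              # requested: 2 GiB expressed in kB
default_hugepage_kb=2048     # default hugepage size on this host: 2 MiB
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1024
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))             # 512
echo "nr_hugepages=$nr_hugepages ($per_node per node across $no_nodes nodes)"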
21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.901 21:05:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:17.200 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.200 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.200 
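Note: the allocation itself is delegated to SPDK's scripts/setup.sh, driven by the NRHUGE and HUGE_EVEN_ALLOC variables set just above it in the trace; verification then starts by checking that the transparent-hugepage mode string ("always [madvise] never") does not read "[never]" before AnonHugePages is consulted. A hedged reproduction of that step, assuming the mode string is read from /sys/kernel/mm/transparent_hugepage/enabled:

# Same invocation the trace shows, with the workspace path it uses.
NRHUGE=1024 HUGE_EVEN_ALLOC=yes \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh

# Assumption: this is where the "always [madvise] never" string comes from.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
[[ $thp != *"[never]"* ]] && echo "THP mode: $thp (anon hugepages possible)"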
21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42718540 kB' 'MemAvailable: 45103868 kB' 'Buffers: 12536 kB' 'Cached: 11394772 kB' 'SwapCached: 16 kB' 'Active: 9650476 kB' 'Inactive: 2354388 kB' 'Active(anon): 9175124 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600888 kB' 'Mapped: 209664 kB' 'Shmem: 8634656 kB' 'KReclaimable: 252384 kB' 'Slab: 792208 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539824 kB' 'KernelStack: 21936 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10592100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.200 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42719496 kB' 'MemAvailable: 45104824 kB' 'Buffers: 12536 kB' 'Cached: 11394772 kB' 'SwapCached: 16 kB' 'Active: 9650192 kB' 'Inactive: 2354388 kB' 'Active(anon): 9174840 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600592 kB' 'Mapped: 209608 kB' 'Shmem: 8634656 kB' 'KReclaimable: 252384 kB' 'Slab: 792208 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539824 kB' 'KernelStack: 21920 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10592116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.201 21:05:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.202 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 
21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42719496 kB' 'MemAvailable: 45104824 kB' 'Buffers: 12536 kB' 'Cached: 11394776 kB' 'SwapCached: 16 kB' 'Active: 9649872 kB' 'Inactive: 2354388 kB' 'Active(anon): 9174520 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600268 kB' 'Mapped: 209608 kB' 'Shmem: 8634660 kB' 'KReclaimable: 252384 kB' 'Slab: 792208 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539824 kB' 'KernelStack: 21920 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10592140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.203 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.205 nr_hugepages=1024 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.205 resv_hugepages=0 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.205 surplus_hugepages=0 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.205 anon_hugepages=0 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.205 
21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42720656 kB' 'MemAvailable: 45105984 kB' 'Buffers: 12536 kB' 'Cached: 11394832 kB' 'SwapCached: 16 kB' 'Active: 9650216 kB' 'Inactive: 2354388 kB' 'Active(anon): 9174864 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600564 kB' 'Mapped: 209608 kB' 'Shmem: 8634716 kB' 'KReclaimable: 252384 kB' 'Slab: 792208 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539824 kB' 'KernelStack: 21920 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10592160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 
21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
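[editor's note] The long run of "continue" records above is setup/common.sh scanning every /proc/meminfo key until it reaches HugePages_Total and echoes its value (1024 on this node). Below is a minimal standalone sketch of that scan; the function name and the direct file read are paraphrased from the trace rather than copied from the repository, so treat it as an illustration, not the actual helper.

  #!/usr/bin/env bash
  # Simplified sketch of the scan traced above (setup/common.sh@31-33):
  # split each meminfo line on ': ', skip every key that is not the one
  # requested (those skips are the repeated "continue" records in the log),
  # then print the value of the matching key.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Total    # prints 1024 in the run shown here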
00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26753656 kB' 'MemUsed: 5838428 kB' 'SwapCached: 16 kB' 'Active: 3118356 kB' 'Inactive: 180704 kB' 'Active(anon): 2901736 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049192 kB' 'Mapped: 137880 kB' 'AnonPages: 253096 kB' 'Shmem: 2651868 kB' 'KernelStack: 11896 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131352 kB' 'Slab: 378936 kB' 'SReclaimable: 131352 kB' 'SUnreclaim: 247584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
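[editor's note] When get_meminfo is called with a node number, as in the HugePages_Surp lookup for node 0 above, the trace shows it switching the source file to /sys/devices/system/node/node0/meminfo and stripping the leading "Node 0 " prefix before running the same ': '-split scan. A hedged sketch of that per-node path, reconstructed from the traced commands rather than taken verbatim from setup/common.sh:

  #!/usr/bin/env bash
  # Sketch of the per-node lookup traced above (setup/common.sh@17-29):
  # prefer the node's own meminfo file when it exists, strip the "Node <N> "
  # prefix from every line, then scan for the requested key as before.
  shopt -s extglob                       # the +([0-9]) pattern below needs extglob
  get_node_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_node_meminfo HugePages_Surp 0      # prints 0 for node0 in this run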
00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
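[editor's note] Around these per-node lookups, hugepages.sh (@110-128 in the trace) does the actual bookkeeping: it checks that the global HugePages_Total matches nr_hugepages plus surplus and reserved pages, then adds each node's reserved and surplus pages to the kernel-reported per-node totals before comparing them with the 512-per-node split this even_2G_alloc test expects. The values below are taken from this run; the structure is paraphrased from the trace, not copied from the script.

  #!/usr/bin/env bash
  # Per-node bookkeeping as traced above (setup/hugepages.sh@110-128), with the
  # numbers from this run: 1024 hugepages spread evenly over 2 NUMA nodes,
  # no reserved and no surplus pages, so each node should end up at 512.
  nr_hugepages=1024 surp=0 resv=0
  nodes_test=(512 512)                    # HugePages_Total reported per node
  nodes_sys=(512 512)                     # what the test asked each node for

  (( 1024 == nr_hugepages + surp + resv )) || echo "global hugepage count is off"

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))      # charge reserved pages to the node
      (( nodes_test[node] += 0 ))         # plus that node's HugePages_Surp (0 here)
      echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done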
00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15967504 kB' 'MemUsed: 11735644 kB' 'SwapCached: 0 kB' 'Active: 6531808 kB' 'Inactive: 2173684 kB' 'Active(anon): 6273076 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358216 kB' 'Mapped: 71728 kB' 'AnonPages: 347416 kB' 'Shmem: 5982872 kB' 'KernelStack: 10024 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121032 kB' 'Slab: 413272 kB' 'SReclaimable: 121032 kB' 'SUnreclaim: 292240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.208 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 
21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
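[editor's note] Just past this point the trace reaches the final comparison step of the even_2G_alloc test (setup/hugepages.sh@126-130): the observed and expected per-node counts are dropped into two "set" arrays keyed by count, and the test passes only when the two key sets coincide, which is what the [[ 512 == \5\1\2 ]] check that follows expresses. A hedged sketch of that comparison with this run's numbers; the exact array types and final test in the real script may differ.

  #!/usr/bin/env bash
  # Sketch of the set-style comparison the trace performs right after this
  # (setup/hugepages.sh@126-130): record every observed and every expected
  # per-node count as an array key, then require the two key sets to match.
  nodes_test=(512 512)    # observed per-node totals in this run
  nodes_sys=(512 512)     # expected per-node totals in this run
  declare -A sorted_t sorted_s
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1
      sorted_s[${nodes_sys[node]}]=1
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node hugepage split verified"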
00:04:17.209 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.210 node0=512 expecting 512 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:17.210 node1=512 expecting 512 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:17.210 00:04:17.210 real 0m3.461s 00:04:17.210 user 0m1.327s 00:04:17.210 sys 0m2.196s 00:04:17.210 21:05:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.210 21:05:14 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.210 ************************************ 00:04:17.210 END TEST even_2G_alloc 00:04:17.210 ************************************ 00:04:17.210 21:05:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:17.210 21:05:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.210 21:05:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.210 21:05:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.470 ************************************ 00:04:17.470 START TEST odd_alloc 00:04:17.470 ************************************ 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.470 21:05:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:20.769 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.769 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42743976 kB' 'MemAvailable: 45129304 kB' 'Buffers: 12536 kB' 'Cached: 11394936 kB' 'SwapCached: 16 kB' 'Active: 
9651952 kB' 'Inactive: 2354388 kB' 'Active(anon): 9176600 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602212 kB' 'Mapped: 209664 kB' 'Shmem: 8634820 kB' 'KReclaimable: 252384 kB' 'Slab: 791740 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539356 kB' 'KernelStack: 21936 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10592940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213460 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.769 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.770 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42744292 kB' 'MemAvailable: 45129620 kB' 'Buffers: 12536 kB' 'Cached: 11394936 kB' 'SwapCached: 16 kB' 'Active: 9651716 kB' 'Inactive: 2354388 kB' 'Active(anon): 9176364 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602012 kB' 'Mapped: 209640 kB' 'Shmem: 8634820 kB' 'KReclaimable: 252384 kB' 'Slab: 791724 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539340 kB' 'KernelStack: 21920 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10592956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.771 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
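The trace above repeats one pattern per meminfo field: split the line on ': ', skip every key that is not the one requested, and echo the value once the key matches. A condensed sketch of that get_meminfo pattern, reconstructed from the calls traced here (the real setup/common.sh differs in detail; the extglob option and the trailing failure return are assumptions):

  shopt -s extglob                      # needed for the +([0-9]) prefix strip below

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # a per-NUMA-node query reads that node's meminfo instead of the global file
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # per-node lines are prefixed with "Node N "; strip it so the keys match
      mem=("${mem[@]#Node +([0-9]) }")
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every field except the requested one
          echo "$val"                        # numeric value only; the "kB" unit falls into $_
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total    # prints 1025 on this runner, per the dumps in this trace

Doing the parse in pure bash keeps the helper dependency-free, which is why the xtrace shows one comparison per meminfo field rather than a single awk or grep call.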
00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 
21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.772 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42745120 kB' 'MemAvailable: 45130448 kB' 'Buffers: 12536 kB' 'Cached: 11394956 kB' 'SwapCached: 16 kB' 'Active: 9650988 kB' 'Inactive: 2354388 kB' 'Active(anon): 9175636 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 601184 kB' 'Mapped: 209640 kB' 'Shmem: 8634840 kB' 'KReclaimable: 252384 kB' 'Slab: 791772 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539388 kB' 'KernelStack: 21920 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10592976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.773 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
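A side note on the check traced earlier at setup/hugepages.sh@96: anonymous huge pages are only counted when transparent hugepages are not disabled. The string under test, 'always [madvise] never', is the usual format of the kernel's THP setting with the active mode in brackets, so the glob *\[\n\e\v\e\r\]* only matches when 'never' is selected. A rough sketch of that branch (the sysfs path is an assumption; the trace only shows the string being compared):

  # the active THP mode is bracketed, e.g. "always [madvise] never" on this runner
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

  if [[ $thp != *"[never]"* ]]; then
      # THP is not disabled, so anonymous huge pages may exist and must be counted
      anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier; 0 kB here
  else
      anon=0
  fi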
00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 
21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:20.774 nr_hugepages=1025 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.774 resv_hugepages=0 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.774 surplus_hugepages=0 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.774 anon_hugepages=0 00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- 
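At this point verify_nr_hugepages has collected anon=0, surp=0 and resv=0 and echoed the summary above; the checks traced at hugepages.sh@107 and @109 are plain bookkeeping: the odd count of 1025 pages requested by this case must be covered entirely by persistent huge pages, with no surplus or reserved pages making up the difference. A worked version of that arithmetic with the values from this run (a sketch of the bookkeeping, not the script's literal lines):

  # values gathered in the trace above
  nr_hugepages=1025    # the odd page count requested by this test case
  surp=0               # HugePages_Surp
  resv=0               # HugePages_Rsvd
  anon=0               # AnonHugePages
  total=1025           # HugePages_Total / HugePages_Free from the meminfo dumps

  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "odd allocation of $nr_hugepages hugepages verified"
  fi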
00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
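The two arithmetic checks above are the test's bookkeeping: the page count read back from meminfo has to equal what was requested plus any surplus and reserved pages. A rough equivalent of that check, with an illustrative helper and variable names that are not the script's own:

want=1025                                   # what the odd_alloc test asked for
read_mi() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }   # tiny illustrative lookup
total=$(read_mi HugePages_Total)
free=$(read_mi HugePages_Free)
surp=$(read_mi HugePages_Surp)
rsvd=$(read_mi HugePages_Rsvd)
# nothing is reserved or surplus in this run, so both comparisons reduce to 1025 == 1025
(( total == want + surp + rsvd )) && (( free == want )) && echo "hugepage accounting consistent"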
00:04:20.774 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31: get_meminfo reads /proc/meminfo into mem[] and starts scanning it key by key]
00:04:20.775 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42745704 kB' 'MemAvailable: 45131032 kB' 'Buffers: 12536 kB' 'Cached: 11394976 kB' 'SwapCached: 16 kB' 'Active: 9651428 kB' 'Inactive: 2354388 kB' 'Active(anon): 9176076 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 601620 kB' 'Mapped: 209640 kB' 'Shmem: 8634860 kB' 'KReclaimable: 252384 kB' 'Slab: 791772 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 539388 kB' 'KernelStack: 21936 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10594124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
[setup/common.sh@31-32: the read/compare/continue cycle repeats for every key ahead of it (MemTotal through Unaccepted); none of them matches HugePages_Total]
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
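get_nodes above records what the test expects each NUMA node to hold (512 pages on node0, 513 on node1). A small sketch of reading the live per-node counters for the default 2048 kB page size straight from sysfs (the node_pages array name is illustrative):

declare -A node_pages
for d in /sys/devices/system/node/node[0-9]*; do
    # each node exposes its own pool under hugepages/hugepages-<size>kB/
    node_pages[${d##*node}]=$(cat "$d/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "node0=${node_pages[0]} node1=${node_pages[1]}"    # 512 and 513 on this machine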
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[setup/common.sh@17-31: get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from each line and starts the same key scan]
00:04:20.776 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26762504 kB' 'MemUsed: 5829580 kB' 'SwapCached: 16 kB' 'Active: 3120316 kB' 'Inactive: 180704 kB' 'Active(anon): 2903696 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049272 kB' 'Mapped: 137900 kB' 'AnonPages: 254896 kB' 'Shmem: 2651948 kB' 'KernelStack: 11896 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131352 kB' 'Slab: 378640 kB' 'SReclaimable: 131352 kB' 'SUnreclaim: 247288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: read/compare/continue over the node0 meminfo keys (MemTotal through HugePages_Free); none of them matches HugePages_Surp]
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.777 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[setup/common.sh@17-31: the same setup against /sys/devices/system/node/node1/meminfo]
00:04:20.778 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15988040 kB' 'MemUsed: 11715108 kB' 'SwapCached: 0 kB' 'Active: 6531536 kB' 'Inactive: 2173684 kB' 'Active(anon): 6272804 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358276 kB' 'Mapped: 71752 kB' 'AnonPages: 347092 kB' 'Shmem: 5982932 kB' 'KernelStack: 10008 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121032 kB' 'Slab: 413132 kB' 'SReclaimable: 121032 kB' 'SUnreclaim: 292100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[setup/common.sh@31-32: read/compare/continue over the node1 meminfo keys (MemTotal through HugePages_Free); none of them matches HugePages_Surp]
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:20.779 node0=512 expecting 513
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:20.779 node1=513 expecting 512
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:20.779 
00:04:20.779 real 0m3.161s
00:04:20.779 user 0m1.115s
00:04:20.779 sys 0m2.061s
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:20.779 21:05:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:20.779 ************************************
00:04:20.779 END TEST odd_alloc
00:04:20.779 ************************************
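The result just printed is the point of odd_alloc: with an odd request (1025 pages) spread over two NUMA nodes, one node ends up with 512 pages and the other with 513, and the test accepts either ordering. An order-insensitive check along the same lines, with illustrative variable names, not the script's own:

requested=1025
# read back what each node actually holds (standard sysfs counters for 2 MB pages)
got0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)   # 512 here
got1=$(cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages)   # 513 here
(( got0 + got1 == requested )) || echo "total hugepage count mismatch"
# like the sorted_t/sorted_s arrays above: compare the two counts as a set,
# ignoring which node ended up with the extra page
[[ $(printf '%s\n' "$got0" "$got1" | sort -n | xargs) == "512 513" ]] \
    && echo "odd page spread across the nodes as expected"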
(( size >= default_hugepages )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.779 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.780 21:05:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:23.321 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.2 (8086 
2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:23.321 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41721820 kB' 'MemAvailable: 44107148 kB' 'Buffers: 12536 kB' 'Cached: 11395088 kB' 'SwapCached: 16 kB' 'Active: 9652448 kB' 'Inactive: 2354388 kB' 'Active(anon): 9177096 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602696 kB' 'Mapped: 
209692 kB' 'Shmem: 8634972 kB' 'KReclaimable: 252384 kB' 'Slab: 792688 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540304 kB' 'KernelStack: 21952 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10593480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 
21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.321 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
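The trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" pair at a time until it reaches the field it was asked for (here AnonHugePages, later HugePages_Surp and HugePages_Rsvd). A minimal standalone sketch of that parsing pattern follows; the function name is illustrative and the per-NUMA-node handling of the real helper is omitted, so treat it as an assumption-laden reconstruction rather than the SPDK script itself.

    get_meminfo_sketch() {
        # Illustrative reimplementation only, not the SPDK helper itself.
        # Scan /proc/meminfo for one key and print its numeric value.
        local get=$1 mem_f=/proc/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"      # e.g. 1536 for HugePages_Total on this run
                return 0
            fi
        done < "$mem_f"
        echo 0                        # key not present: report 0, like the trace does
    }

    get_meminfo_sketch HugePages_Surp    # prints 0 here, matching the trace

The real helper additionally reads /sys/devices/system/node/node<N>/meminfo when a specific node is requested, which is why the trace tests that path before falling back to /proc/meminfo.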
00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.322 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41722720 kB' 'MemAvailable: 44108048 kB' 'Buffers: 12536 kB' 'Cached: 11395088 kB' 'SwapCached: 16 kB' 'Active: 9652148 kB' 'Inactive: 2354388 kB' 'Active(anon): 9176796 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602408 kB' 'Mapped: 209636 kB' 'Shmem: 8634972 kB' 'KReclaimable: 252384 kB' 'Slab: 792712 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540328 kB' 'KernelStack: 21952 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10593496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.323 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 
21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41726968 kB' 'MemAvailable: 44112296 kB' 'Buffers: 12536 kB' 'Cached: 11395108 
kB' 'SwapCached: 16 kB' 'Active: 9652464 kB' 'Inactive: 2354388 kB' 'Active(anon): 9177112 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602760 kB' 'Mapped: 209636 kB' 'Shmem: 8634992 kB' 'KReclaimable: 252384 kB' 'Slab: 792712 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540328 kB' 'KernelStack: 22000 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10593516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.324 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 
21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.325 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:23.326 nr_hugepages=1536 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.326 resv_hugepages=0 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.326 surplus_hugepages=0 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.326 anon_hugepages=0 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41726876 kB' 'MemAvailable: 44112204 kB' 'Buffers: 12536 kB' 'Cached: 11395124 kB' 'SwapCached: 16 kB' 'Active: 9652100 kB' 'Inactive: 2354388 kB' 'Active(anon): 9176748 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602392 kB' 'Mapped: 209636 kB' 'Shmem: 
8635008 kB' 'KReclaimable: 252384 kB' 'Slab: 792688 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540304 kB' 'KernelStack: 21936 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10593540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213524 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.326 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
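The repeated "continue" lines above are bash xtrace output from the meminfo parser in setup/common.sh: it snapshots the relevant meminfo file, strips any leading "Node N " prefix, then walks each "key: value" pair with IFS=': ' until the requested key (first HugePages_Rsvd, next HugePages_Total) matches, at which point it echoes the value. A minimal sketch of that helper, reconstructed from the trace rather than copied from setup/common.sh, assuming bash 4+ with extglob:

    #!/usr/bin/env bash
    shopt -s extglob                        # needed for the +([0-9]) prefix strip below
    # Reconstructed sketch of the get_meminfo helper seen in the trace above;
    # names follow the trace, the body is simplified.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # per-node counters live under /sys/devices/system/node/nodeN/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # node meminfo lines carry a "Node N " prefix; strip it so the keys align
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # every mismatch is one "continue" in the trace
            echo "$val"                        # e.g. 0 for HugePages_Rsvd, 1536 for HugePages_Total
            return 0
        done
        return 1
    }

On this run, get_meminfo HugePages_Rsvd returns 0 and get_meminfo HugePages_Total returns 1536, which is why hugepages.sh reports resv_hugepages=0 against nr_hugepages=1536.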
00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.327 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.589 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
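From here the trace moves to the per-node half of the check: get_nodes walks /sys/devices/system/node/node*, records the expected split for this run (512 pages on node 0, 1024 on node 1, no_nodes=2), and hugepages.sh then re-runs get_meminfo against each node's own meminfo file (HugePages_Surp, then the free/total counters), as traced below. A hedged sketch of that per-node loop, reusing the get_meminfo sketch above; verify_node_split is a hypothetical wrapper name and the expected values are taken from this log, not from hugepages.sh itself:

    # Per-node verification loop, simplified from the hugepages.sh trace that follows.
    verify_node_split() {
        local -A nodes_test=([0]=512 [1]=1024)   # expected split in this run
        local node surp free
        for node in "${!nodes_test[@]}"; do
            surp=$(get_meminfo HugePages_Surp "$node")
            free=$(get_meminfo HugePages_Free "$node")
            # the real script folds surplus/reserved pages into the expected count
            # before comparing; here each node's counters are just reported
            echo "node$node: expected=${nodes_test[$node]} free=$free surp=$surp"
        done
    }

In this log node0/meminfo reports HugePages_Total/Free of 512 and node1/meminfo reports 1024, matching the 1536 pages requested in total.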
00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26781080 kB' 'MemUsed: 5811004 kB' 'SwapCached: 16 kB' 'Active: 3120672 kB' 'Inactive: 180704 kB' 'Active(anon): 2904052 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049408 kB' 'Mapped: 137908 kB' 'AnonPages: 255220 kB' 'Shmem: 2652084 kB' 'KernelStack: 11928 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131352 kB' 'Slab: 378640 kB' 'SReclaimable: 131352 kB' 'SUnreclaim: 247288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.590 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.591 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.591 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 14948548 kB' 'MemUsed: 12754600 kB' 'SwapCached: 0 kB' 'Active: 6531536 kB' 'Inactive: 2173684 kB' 'Active(anon): 6272804 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358292 kB' 'Mapped: 71728 kB' 'AnonPages: 347192 kB' 'Shmem: 5982948 kB' 'KernelStack: 10024 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121032 kB' 'Slab: 414048 kB' 'SReclaimable: 121032 kB' 'SUnreclaim: 293016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.592 21:05:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.592 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # (read loop walks the remaining node1 meminfo fields, MemUsed through HugePages_Free, continuing past every field that is not HugePages_Surp) 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.593 21:05:20
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.593 node0=512 expecting 512 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:23.593 node1=1024 expecting 1024 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:23.593 00:04:23.593 real 0m2.940s 00:04:23.593 user 0m1.017s 00:04:23.593 sys 0m1.903s 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.593 21:05:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.593 ************************************ 00:04:23.593 END TEST custom_alloc 00:04:23.593 ************************************ 00:04:23.593 21:05:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:23.593 21:05:20 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.593 21:05:20 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.593 21:05:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.593 ************************************ 00:04:23.593 START TEST no_shrink_alloc 00:04:23.593 ************************************ 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:23.593 21:05:20 
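END TEST custom_alloc closes with the two expected per-node counts (node0=512, node1=1024), and no_shrink_alloc immediately recomputes its own target: get_test_nr_hugepages 2097152 0 turns a 2097152 kB request into nr_hugepages=1024 using the 2048 kB Hugepagesize reported in the meminfo dumps, then pins the whole allocation on node 0. A small sketch of that arithmetic and per-node split (hypothetical helper body, not the exact hugepages.sh code):

#!/usr/bin/env bash
# Convert a size in kB into a number of default-sized huge pages and
# spread it over the requested NUMA nodes (here: everything on node 0).
default_hugepages=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this host

get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")                                  # e.g. (0)
    (( size >= default_hugepages )) || return 1
    local nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024
    declare -gA nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages                    # all pages requested on this node
    done
    declare -p nodes_test
}

get_test_nr_hugepages 2097152 0    # -> nodes_test[0]=1024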
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.593 21:05:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:26.131 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.131 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42771676 kB' 'MemAvailable: 45157004 kB' 'Buffers: 12536 kB' 'Cached: 11395244 kB' 'SwapCached: 16 kB' 'Active: 9654524 kB' 'Inactive: 2354388 kB' 'Active(anon): 9179172 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603844 kB' 'Mapped: 209780 kB' 'Shmem: 8635128 kB' 'KReclaimable: 252384 kB' 'Slab: 793044 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540660 kB' 'KernelStack: 21968 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10594312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.396 21:05:23 
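verify_nr_hugepages starts by deciding whether AnonHugePages should be counted at all: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is the content of /sys/kernel/mm/transparent_hugepage/enabled being checked for a selected "never". It then pulls AnonHugePages, HugePages_Surp and HugePages_Rsvd out of /proc/meminfo with the same field-matching loop. A condensed sketch of that bookkeeping; the inline get_meminfo is only a stand-in for the setup/common.sh helper traced here:

#!/usr/bin/env bash
# Rough shape of the verification pass: count anonymous THP only when THP is
# not disabled, then collect the surplus and reserved huge-page counters.
get_meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

verify_nr_hugepages() {
    local anon=0 surp resv thp
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    [[ $thp != *"[never]"* ]] && anon=$(get_meminfo AnonHugePages)   # kB of THP in use
    surp=$(get_meminfo HugePages_Surp)    # surplus pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)    # pages reserved but not yet faulted in
    echo "anon=$anon surp=$surp resv=$resv"
}

verify_nr_hugepages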
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.396 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (read loop walks Buffers through VmallocTotal, continuing past every field that is not AnonHugePages) 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42775256 kB' 'MemAvailable: 45160584 kB' 'Buffers: 12536 kB' 'Cached: 11395248 kB' 'SwapCached: 16 kB' 'Active: 9653396 kB' 'Inactive: 2354388 kB' 'Active(anon): 9178044 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603212 kB' 'Mapped: 209644 kB' 'Shmem: 8635132 kB' 'KReclaimable: 252384 kB' 'Slab: 792988 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540604 kB' 'KernelStack: 21952 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10594332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.398 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # (read loop walks Active through Unaccepted, continuing past every field that is not HugePages_Surp) 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42776384 kB' 'MemAvailable: 45161712 kB' 'Buffers: 12536 kB' 'Cached: 11395264 kB' 'SwapCached: 16 kB' 'Active: 9653132 kB' 'Inactive: 2354388 kB' 'Active(anon): 9177780 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602928 kB' 'Mapped: 209644 kB' 'Shmem: 8635148 kB' 'KReclaimable: 252384 kB' 'Slab: 792988 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540604 
kB' 'KernelStack: 21936 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10594352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
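The trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" line at a time, discarding every field until the requested key (HugePages_Rsvd at this point) matches, and then echoing its value. The following is a condensed bash sketch of that pattern, reconstructed from the xtrace rather than copied from the script, so treat names and details as approximate:

  # get_meminfo sketch (approximate reconstruction from the xtrace, not the verbatim
  # setup/common.sh source): pick /proc/meminfo or a per-node meminfo file, strip any
  # "Node N " prefix, then scan "key: value" pairs until the requested key matches.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      local -a mem
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node 0 "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"             # e.g. HugePages_Rsvd -> 0 in this run
              return 0
          fi
      done
      echo 0
  }

With that sketch, get_meminfo HugePages_Rsvd would print 0 here, and get_meminfo HugePages_Surp 0 would read node 0's meminfo file, matching the calls logged in this section.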
00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.403 nr_hugepages=1024 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.403 resv_hugepages=0 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.403 surplus_hugepages=0 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.403 anon_hugepages=0 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42776636 kB' 'MemAvailable: 45161964 kB' 'Buffers: 12536 kB' 'Cached: 11395284 kB' 'SwapCached: 16 kB' 'Active: 9653464 kB' 'Inactive: 2354388 kB' 'Active(anon): 9178112 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603224 kB' 'Mapped: 209644 kB' 'Shmem: 8635168 kB' 'KReclaimable: 
252384 kB' 'Slab: 792988 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540604 kB' 'KernelStack: 21936 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10594376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
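For context on where this loop is heading: the surrounding trace has already established nr_hugepages=1024, surp=0 and resv=0, and setup/hugepages.sh@107-@110 simply check that the HugePages_Total value being extracted here agrees with that sum. A minimal sketch of the consistency check, using the values visible in this run and the get_meminfo sketch above (illustrative, not the script verbatim):

  # Hugepage accounting check mirrored from setup/hugepages.sh (simplified):
  nr_hugepages=1024                       # requested by this test
  surp=$(get_meminfo HugePages_Surp)      # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
  total=$(get_meminfo HugePages_Total)    # 1024 in this run
  (( total == nr_hugepages + surp + resv )) \
      && echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"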
00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.404 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.405 21:05:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.405 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25758252 kB' 'MemUsed: 6833832 kB' 'SwapCached: 16 kB' 'Active: 3119728 kB' 'Inactive: 180704 kB' 'Active(anon): 2903108 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049504 kB' 'Mapped: 137916 kB' 'AnonPages: 254056 kB' 'Shmem: 2652180 kB' 'KernelStack: 11896 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131352 kB' 'Slab: 378848 kB' 'SReclaimable: 131352 kB' 'SUnreclaim: 247496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[per-key scan trace condensed: 00:04:26.405-00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 reads each remaining node0 meminfo line with IFS=': ' and hits 'continue' for every key until HugePages_Surp matches]
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:26.407 node0=1024 expecting 1024
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.407 21:05:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:29.702 0000:00:04.0-7 and 0000:80:04.0-7 (8086 2021), 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver  [17 identical per-device lines condensed]
00:04:29.703 INFO: Requested 512 hugepages but 1024 already allocated on node0
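The INFO line above is the hugepage-allocation step: scripts/setup.sh is run with NRHUGE=512 and CLEAR_HUGE=no, sees that node0 already holds 1024 hugepages, and leaves the reservation alone. A minimal sketch of that kind of per-node reservation, assuming the standard kernel sysfs layout (function and variable names here are illustrative, not SPDK's setup.sh):

reserve_hugepages() {
    # Reserve "want" 2 MB hugepages on a NUMA node unless enough are already
    # allocated -- mirrors the "Requested 512 ... but 1024 already allocated"
    # message in the log above.
    local node=${1:-0} want=${2:-512}
    local sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
    local have
    have=$(cat "${sysfs}/nr_hugepages")
    if (( have >= want )); then
        echo "INFO: Requested ${want} hugepages but ${have} already allocated on node${node}"
        return 0
    fi
    echo "${want}" > "${sysfs}/nr_hugepages"   # needs root; the kernel may grant fewer if memory is fragmented
}

On this test node, reserve_hugepages 0 512 would take the early-return path, which is exactly what the trace shows before verify_nr_hugepages starts.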
00:04:29.703 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:29.703 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon  [six separate 'local' trace entries condensed]
00:04:29.703 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.703 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.703 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # local get=AnonHugePages; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _  [get_meminfo prologue condensed]
00:04:29.703 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42823432 kB' 'MemAvailable: 45208760 kB' 'Buffers: 12536 kB' 'Cached: 11395392 kB' 'SwapCached: 16 kB' 'Active: 9654632 kB' 'Inactive: 2354388 kB' 'Active(anon): 9179280 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604504 kB' 'Mapped: 209768 kB' 'Shmem: 8635276 kB' 'KReclaimable: 252384 kB' 'Slab: 792680 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540296 kB' 'KernelStack: 21984 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10595132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213588 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
[per-key scan trace condensed: 00:04:29.703-00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 reads each meminfo line with IFS=': ' and hits 'continue' for every key until AnonHugePages matches]
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
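The scan condensed above is the get_meminfo pattern from setup/common.sh: read the meminfo file line by line with IFS=': ', strip any 'Node N ' prefix, and echo the value of the requested key (0 if it is never found). A self-contained sketch of that lookup, with names chosen here for illustration rather than copied from common.sh:

get_meminfo_value() {
    # Echo the value of one /proc/meminfo (or per-node meminfo) key,
    # falling back to 0 -- the same read/compare/continue loop the
    # condensed trace steps through for AnonHugePages and HugePages_Surp.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node ${node} "}            # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key and value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# get_meminfo_value HugePages_Free   -> 1024 on this box
# get_meminfo_value AnonHugePages    -> 0, matching "anon=0" in the trace above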
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-31 -- # local node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _  [get_meminfo prologue condensed]
00:04:29.704 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42824588 kB' 'MemAvailable: 45209916 kB' 'Buffers: 12536 kB' 'Cached: 11395396 kB' 'SwapCached: 16 kB' 'Active: 9654176 kB' 'Inactive: 2354388 kB' 'Active(anon): 9178824 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603968 kB' 'Mapped: 209712 kB' 'Shmem: 8635280 kB' 'KReclaimable: 252384 kB' 'Slab: 792680 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540296 kB' 'KernelStack: 21952 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10595152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
[per-key scan trace condensed: 00:04:29.704-00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 reads each meminfo line with IFS=': ' and hits 'continue' for every key until HugePages_Surp matches]
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
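At this point the AnonHugePages and HugePages_Surp lookups have both returned 0 (anon=0, surp=0) and HugePages_Rsvd is being fetched next; the remaining work is per-node accounting against the expected count, which is what produced the earlier "node0=1024 expecting 1024" line. A rough sketch of that verification shape, reusing the get_meminfo_value helper sketched above (the exact surp/resv/anon adjustment applied by hugepages.sh is not reproduced here; names are illustrative):

verify_hugepages() {
    # Check every NUMA node's hugepage count against the expected value and
    # print "nodeN=X expecting Y", as the trace did for node0.
    local expected=$1 node count surp resv
    surp=$(get_meminfo_value HugePages_Surp)   # 0 in the trace above
    resv=$(get_meminfo_value HugePages_Rsvd)   # being read as the trace continues
    # The real hugepages.sh folds surp/resv/anon into its per-node arithmetic
    # (nodes_test/nodes_sys arrays); they are fetched here only to mirror the trace.
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        count=$(get_meminfo_value HugePages_Total "$node")
        echo "node${node}=${count} expecting ${expected}"
        (( count == expected )) || return 1
    done
}

# verify_hugepages 1024   -> "node0=1024 expecting 1024" on this box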
00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42825860 kB' 'MemAvailable: 45211188 kB' 'Buffers: 12536 kB' 'Cached: 11395412 kB' 'SwapCached: 16 kB' 'Active: 9653864 kB' 'Inactive: 2354388 kB' 'Active(anon): 9178512 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603532 kB' 'Mapped: 209652 kB' 'Shmem: 8635296 kB' 'KReclaimable: 252384 kB' 'Slab: 792716 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540332 kB' 'KernelStack: 21952 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10595172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.706 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.706 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 
21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.707 21:05:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.708 nr_hugepages=1024 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.708 resv_hugepages=0 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.708 surplus_hugepages=0 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.708 anon_hugepages=0 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.708 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.708 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42826428 kB' 'MemAvailable: 45211756 kB' 'Buffers: 12536 kB' 'Cached: 11395436 kB' 'SwapCached: 16 kB' 'Active: 9653856 kB' 'Inactive: 2354388 kB' 'Active(anon): 9178504 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603528 kB' 'Mapped: 209652 kB' 'Shmem: 8635320 kB' 'KReclaimable: 252384 kB' 'Slab: 792716 kB' 'SReclaimable: 252384 kB' 'SUnreclaim: 540332 kB' 'KernelStack: 21952 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10595196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
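Before this HugePages_Total pass, the values echoed a few entries back (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check at hugepages.sh@110. A condensed sketch of that step, building on the get_meminfo sketch above (nr_hugepages and anon are set earlier in the test and are assumed here; values in the comments are the ones logged in this run):

    nr_hugepages=1024                        # requested earlier by the test
    anon=0                                   # AnonHugePages, fetched before this excerpt
    surp=$(get_meminfo HugePages_Surp)       # hugepages.sh@99 -> 0
    resv=$(get_meminfo HugePages_Rsvd)       # @100 -> 0
    echo "nr_hugepages=$nr_hugepages"        # @102
    echo "resv_hugepages=$resv"              # @103
    echo "surplus_hugepages=$surp"           # @104
    echo "anon_hugepages=$anon"              # @105
    # @110: the kernel's HugePages_Total must equal the requested count plus surplus and reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

With no surplus or reserved pages that works out to 1024 == 1024, which is why the @110 pass below ends with 'echo 1024' followed by a successful comparison.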
00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.709 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.710 21:05:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25792460 kB' 'MemUsed: 6799624 kB' 'SwapCached: 16 kB' 'Active: 3119968 kB' 'Inactive: 180704 kB' 'Active(anon): 2903348 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3049628 kB' 'Mapped: 137924 kB' 'AnonPages: 254264 kB' 'Shmem: 2652304 kB' 'KernelStack: 11944 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131352 kB' 'Slab: 378612 kB' 'SReclaimable: 131352 kB' 'SUnreclaim: 247260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.710 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
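mem_f has just switched to /sys/devices/system/node/node0/meminfo, because after the system-wide check the test walks the NUMA nodes (get_nodes at hugepages.sh@27-@33, then the per-node loop at @115 onward). A rough sketch of that walk; the trace only shows the resolved per-node counts (1024 pages on node0, 0 on node1), so the sysfs read below is an assumption, and the plain glob stands in for the extglob pattern node+([0-9]) used by the script:

    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        # assumed source of the per-node count; only the resulting 1024/0 values appear in the trace
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}     # 2 on this machine
    (( no_nodes > 0 ))            # hugepages.sh@33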
00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [the same three-step trace -- IFS=': ', read -r var val _, compare the field name against HugePages_Surp, continue -- repeats here for every remaining /proc/meminfo field from Inactive through ShmemHugePages; none of them match, so the scan resumes below. A standalone sketch of this scan is given after the hugepages test summary further down.]
00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.711 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.712 node0=1024 expecting 1024 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.712 00:04:29.712 real 0m5.916s 00:04:29.712 user 0m1.968s 00:04:29.712 sys 0m3.931s 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.712 21:05:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.712 ************************************ 00:04:29.712 END TEST no_shrink_alloc 00:04:29.712 ************************************ 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.712 21:05:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.712 00:04:29.712 real 0m24.945s 00:04:29.712 user 0m8.455s 00:04:29.712 sys 0m15.225s 00:04:29.712 21:05:26 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.712 21:05:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.712 ************************************ 00:04:29.712 END TEST hugepages 00:04:29.712 ************************************ 00:04:29.712 21:05:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:29.712 21:05:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.712 21:05:26 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.712 21:05:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.712 ************************************ 00:04:29.712 START TEST driver 00:04:29.712 ************************************ 00:04:29.712 21:05:26 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:29.712 * Looking for test storage... 
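For reference, the field-by-field scan condensed above is how setup/common.sh pulls a single value such as HugePages_Surp out of /proc/meminfo: each line is split into a field name and a value, every non-matching field is skipped, and the value is echoed once the requested field is found. A minimal standalone sketch of that pattern, with an illustrative function name and plain /proc/meminfo assumed as input (the per-node handling visible elsewhere in this run is left out):

# Minimal sketch of the meminfo scan traced above; the function name is
# illustrative and the input is assumed to be /proc/meminfo.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # every non-matching field name falls through, as in the trace
        [[ $var == "$want" ]] || continue
        echo "$val"          # any trailing "kB" lands in "_"
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. on this node: get_meminfo_field HugePages_Surp  -> 0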
00:04:29.712 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:29.712 21:05:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:29.712 21:05:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.712 21:05:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.075 21:05:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:35.075 21:05:31 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.075 21:05:31 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.075 21:05:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:35.075 ************************************ 00:04:35.075 START TEST guess_driver 00:04:35.075 ************************************ 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:35.075 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:35.076 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:35.076 21:05:31 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:35.076 Looking for driver=vfio-pci 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.076 21:05:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.613 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.614 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.872 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.872 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.872 21:05:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.254 21:05:36 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.527 00:04:44.527 real 0m9.322s 00:04:44.527 user 0m2.374s 00:04:44.527 sys 0m4.716s 00:04:44.527 21:05:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.527 21:05:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:44.527 ************************************ 00:04:44.527 END TEST guess_driver 00:04:44.527 ************************************ 00:04:44.527 00:04:44.527 real 0m14.183s 00:04:44.527 user 0m3.794s 00:04:44.527 sys 0m7.394s 00:04:44.527 21:05:40 setup.sh.driver -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:04:44.527 21:05:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:44.527 ************************************ 00:04:44.527 END TEST driver 00:04:44.527 ************************************ 00:04:44.527 21:05:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:44.527 21:05:40 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.527 21:05:40 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.527 21:05:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:44.527 ************************************ 00:04:44.527 START TEST devices 00:04:44.527 ************************************ 00:04:44.527 21:05:40 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:44.527 * Looking for test storage... 00:04:44.527 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:44.527 21:05:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:44.527 21:05:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:44.527 21:05:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.527 21:05:40 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:47.810 21:05:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
nvme0n1 00:04:47.810 No valid GPT data, bailing 00:04:47.810 21:05:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:47.810 21:05:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:47.810 21:05:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:47.810 21:05:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.810 21:05:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.810 ************************************ 00:04:47.810 START TEST nvme_mount 00:04:47.810 ************************************ 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
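Earlier in this trace the devices suite settles on its test disk: spdk-gpt.py and 'blkid -s PTTYPE' find no partition table on nvme0n1 ("No valid GPT data, bailing", return 1), and the device's 1600321314816 bytes clear the 3221225472-byte (3 GiB) min_disk_size, so nvme0n1 is recorded against PCI address 0000:d8:00.0. A rough standalone equivalent of that selection, with an illustrative function name and the size taken from sysfs 512-byte sectors (zoned and already-claimed namespaces are assumed to be skipped the same way):

# Rough sketch of the disk-selection check traced above.
pick_test_disk() {
    local min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes
    local sysfs dev size
    for sysfs in /sys/block/nvme*n*; do
        dev=${sysfs##*/}
        # a device that already carries a partition table is in use -> skip it
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || continue
        size=$(( $(cat "$sysfs/size") * 512 ))        # sysfs reports 512-byte sectors
        (( size >= min_disk_size )) || continue
        echo "$dev"
        return 0
    done
    return 1
}

# On this machine: pick_test_disk -> nvme0n1 (1600321314816 bytes on 0000:d8:00.0)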
00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.811 21:05:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:48.750 Creating new GPT entries in memory. 00:04:48.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.750 other utilities. 00:04:48.750 21:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.750 21:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.750 21:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.750 21:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.750 21:05:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:50.131 Creating new GPT entries in memory. 00:04:50.131 The operation has completed successfully. 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3981354 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.131 21:05:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 
21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.670 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.929 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.929 21:05:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.188 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.188 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.188 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.188 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.188 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:53.188 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:53.188 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.188 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.188 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.188 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.448 21:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.766 21:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.054 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.054 00:05:00.054 real 0m12.274s 00:05:00.054 user 0m3.551s 00:05:00.054 sys 0m6.598s 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.054 21:05:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:00.054 
************************************ 00:05:00.054 END TEST nvme_mount 00:05:00.054 ************************************ 00:05:00.054 21:05:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:00.054 21:05:56 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.054 21:05:56 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.054 21:05:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:00.054 ************************************ 00:05:00.054 START TEST dm_mount 00:05:00.054 ************************************ 00:05:00.054 21:05:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:00.054 21:05:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:00.054 21:05:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:00.054 21:05:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:00.054 21:05:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.313 21:05:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:01.248 Creating new GPT entries in memory. 00:05:01.248 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.248 other utilities. 00:05:01.248 21:05:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.248 21:05:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.248 21:05:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:01.248 21:05:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.248 21:05:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.186 Creating new GPT entries in memory. 00:05:02.186 The operation has completed successfully. 00:05:02.186 21:05:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.186 21:05:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.186 21:05:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.186 21:05:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.186 21:05:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:03.128 The operation has completed successfully. 00:05:03.128 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.128 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.128 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3985781 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.387 21:06:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.678 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.679 
21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.679 21:06:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.216 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.216 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:09.216 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.216 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:09.476 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:09.476 00:05:09.476 real 0m9.321s 00:05:09.476 user 0m2.176s 00:05:09.476 sys 0m4.158s 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.476 21:06:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:09.476 ************************************ 00:05:09.476 END TEST dm_mount 00:05:09.476 ************************************ 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.476 21:06:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.736 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:09.736 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:09.736 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.736 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.736 21:06:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:09.736 00:05:09.736 real 0m25.955s 00:05:09.736 user 0m7.228s 00:05:09.736 sys 0m13.523s 00:05:09.736 21:06:06 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.736 21:06:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.736 ************************************ 00:05:09.736 END TEST devices 00:05:09.736 ************************************ 00:05:09.995 00:05:09.995 real 1m28.661s 00:05:09.995 user 0m26.896s 00:05:09.995 sys 0m50.339s 00:05:09.995 21:06:06 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.995 21:06:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.995 ************************************ 00:05:09.995 END TEST setup.sh 00:05:09.995 ************************************ 00:05:09.995 21:06:06 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:13.358 Hugepages 00:05:13.358 node hugesize free / total 00:05:13.358 node0 1048576kB 0 / 0 00:05:13.358 node0 2048kB 2048 / 2048 00:05:13.358 node1 1048576kB 0 / 0 00:05:13.358 node1 2048kB 0 / 0 00:05:13.358 00:05:13.358 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.358 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:13.358 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:13.358 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:13.358 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:13.358 21:06:09 -- spdk/autotest.sh@130 -- # uname -s 00:05:13.358 21:06:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:13.358 21:06:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:13.358 21:06:09 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:16.662 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.662 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:18.040 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.040 21:06:14 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:18.975 21:06:15 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:18.975 21:06:15 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:18.975 21:06:15 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.975 21:06:15 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:18.975 21:06:15 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:18.975 21:06:15 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:18.975 21:06:15 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.975 21:06:15 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.975 21:06:15 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:19.234 21:06:15 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:19.234 21:06:15 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:19.234 21:06:15 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.523 Waiting for block devices as requested 00:05:22.523 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:22.523 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:22.523 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:22.877 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:22.877 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:22.877 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:22.877 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:22.877 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:23.135 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:23.135 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:23.135 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:23.394 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.394 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:23.394 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:23.654 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:23.654 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:23.654 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:23.913 21:06:20 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:23.913 21:06:20 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:05:23.913 21:06:20 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:23.913 21:06:20 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:23.913 21:06:20 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:23.913 21:06:20 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:23.913 21:06:20 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:05:23.913 21:06:20 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:23.913 21:06:20 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:23.913 21:06:20 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:23.913 21:06:20 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:23.913 21:06:20 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:23.913 21:06:20 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:23.913 21:06:20 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:23.913 21:06:20 -- common/autotest_common.sh@1553 -- # continue 00:05:23.913 21:06:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:23.913 21:06:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.913 21:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.913 21:06:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:23.913 21:06:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.913 21:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.913 21:06:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:27.206 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:05:27.206 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:27.206 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.587 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:28.587 21:06:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:28.587 21:06:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.587 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:28.587 21:06:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:28.587 21:06:25 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:28.587 21:06:25 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.587 21:06:25 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:28.587 21:06:25 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:28.587 21:06:25 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:28.587 21:06:25 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:28.587 21:06:25 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:28.587 21:06:25 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.587 21:06:25 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.587 21:06:25 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:28.587 21:06:25 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:28.587 21:06:25 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:28.587 21:06:25 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:28.587 21:06:25 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:28.587 21:06:25 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:28.587 21:06:25 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:28.587 21:06:25 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:28.587 21:06:25 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:05:28.587 21:06:25 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:05:28.587 21:06:25 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3995877 00:05:28.587 21:06:25 -- common/autotest_common.sh@1594 -- # waitforlisten 3995877 00:05:28.587 21:06:25 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.587 21:06:25 -- common/autotest_common.sh@827 -- # '[' -z 3995877 ']' 00:05:28.587 21:06:25 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.587 21:06:25 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.587 21:06:25 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.587 21:06:25 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.587 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:28.587 [2024-07-14 21:06:25.400955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
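The opal_revert_cleanup step above discovers its target controller by listing the NVMe addresses SPDK knows about and keeping only the devices whose PCI device ID matches 0x0a54. A minimal standalone sketch of that filter, assuming the same workspace layout as this job; the loop is an illustrative reconstruction, not the autotest helper verbatim:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
  # each BDF exposes its PCI device ID via sysfs; the cleanup only keeps 0x0a54 devices
  device=$(cat "/sys/bus/pci/devices/$bdf/device")
  [[ $device == 0x0a54 ]] && echo "$bdf"    # prints 0000:d8:00.0 on this node
done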
00:05:28.587 [2024-07-14 21:06:25.401017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995877 ] 00:05:28.587 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.587 [2024-07-14 21:06:25.467001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.846 [2024-07-14 21:06:25.505588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.846 21:06:25 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.846 21:06:25 -- common/autotest_common.sh@860 -- # return 0 00:05:28.846 21:06:25 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:28.846 21:06:25 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:28.846 21:06:25 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:32.165 nvme0n1 00:05:32.165 21:06:28 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:32.165 [2024-07-14 21:06:28.836219] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:32.165 request: 00:05:32.165 { 00:05:32.165 "nvme_ctrlr_name": "nvme0", 00:05:32.165 "password": "test", 00:05:32.165 "method": "bdev_nvme_opal_revert", 00:05:32.165 "req_id": 1 00:05:32.165 } 00:05:32.165 Got JSON-RPC error response 00:05:32.165 response: 00:05:32.165 { 00:05:32.165 "code": -32602, 00:05:32.165 "message": "Invalid parameters" 00:05:32.165 } 00:05:32.165 21:06:28 -- common/autotest_common.sh@1600 -- # true 00:05:32.165 21:06:28 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:32.165 21:06:28 -- common/autotest_common.sh@1604 -- # killprocess 3995877 00:05:32.165 21:06:28 -- common/autotest_common.sh@946 -- # '[' -z 3995877 ']' 00:05:32.165 21:06:28 -- common/autotest_common.sh@950 -- # kill -0 3995877 00:05:32.165 21:06:28 -- common/autotest_common.sh@951 -- # uname 00:05:32.165 21:06:28 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:32.165 21:06:28 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3995877 00:05:32.165 21:06:28 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:32.165 21:06:28 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:32.165 21:06:28 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3995877' 00:05:32.165 killing process with pid 3995877 00:05:32.165 21:06:28 -- common/autotest_common.sh@965 -- # kill 3995877 00:05:32.165 21:06:28 -- common/autotest_common.sh@970 -- # wait 3995877 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.165 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152
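The Opal revert attempt logged above issues two JSON-RPC calls against the freshly started spdk_tgt; on this drive the revert is rejected with "nvme0 not support opal", which the test tolerates. The same calls can be reproduced by hand while a target is running, with the arguments taken directly from the log:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# attach the controller at 0000:d8:00.0 as bdev controller "nvme0"
"$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
# request an Opal revert with password "test"; expect error -32602 here because the drive lacks Opal support
"$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test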
00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:32.166 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.705 21:06:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:34.705 21:06:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:34.705 21:06:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.705 21:06:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.705 21:06:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:34.705 21:06:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:34.705 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.705 21:06:31 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:34.705 21:06:31 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:34.705 21:06:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.705 21:06:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.705 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.705 ************************************ 00:05:34.705 START TEST env 00:05:34.705 ************************************ 00:05:34.705 21:06:31 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:34.705 * Looking for test storage... 
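Every START TEST/END TEST banner and every real/user/sys line in this log comes from the harness's run_test helper in common/autotest_common.sh. A rough, hypothetical sketch of that wrapping, for orientation only and not the actual SPDK helper:

run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                 # emits the real/user/sys timings seen after each test
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
# e.g. the env suite launched above:
run_test_sketch env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh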
00:05:34.705 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:34.705 21:06:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.705 21:06:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.705 21:06:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.705 21:06:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.705 ************************************ 00:05:34.705 START TEST env_memory 00:05:34.705 ************************************ 00:05:34.705 21:06:31 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.705 00:05:34.705 00:05:34.705 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.705 http://cunit.sourceforge.net/ 00:05:34.705 00:05:34.705 00:05:34.705 Suite: memory 00:05:34.705 Test: alloc and free memory map ...[2024-07-14 21:06:31.193486] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.705 passed 00:05:34.705 Test: mem map translation ...[2024-07-14 21:06:31.207197] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.705 [2024-07-14 21:06:31.207215] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.705 [2024-07-14 21:06:31.207245] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.705 [2024-07-14 21:06:31.207253] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.705 passed 00:05:34.705 Test: mem map registration ...[2024-07-14 21:06:31.228529] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:34.705 [2024-07-14 21:06:31.228545] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:34.705 passed 00:05:34.705 Test: mem map adjacent registrations ...passed 00:05:34.705 00:05:34.705 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.705 suites 1 1 n/a 0 0 00:05:34.705 tests 4 4 4 0 0 00:05:34.705 asserts 152 152 152 0 n/a 00:05:34.705 00:05:34.705 Elapsed time = 0.088 seconds 00:05:34.705 00:05:34.705 real 0m0.101s 00:05:34.705 user 0m0.085s 00:05:34.705 sys 0m0.015s 00:05:34.705 21:06:31 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.705 21:06:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.705 ************************************ 00:05:34.705 END TEST env_memory 00:05:34.705 ************************************ 00:05:34.705 21:06:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.705 21:06:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.705 21:06:31 env -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:05:34.705 21:06:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.705 ************************************ 00:05:34.705 START TEST env_vtophys 00:05:34.705 ************************************ 00:05:34.705 21:06:31 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.705 EAL: lib.eal log level changed from notice to debug 00:05:34.705 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.705 EAL: Detected lcore 1 as core 1 on socket 0 00:05:34.705 EAL: Detected lcore 2 as core 2 on socket 0 00:05:34.705 EAL: Detected lcore 3 as core 3 on socket 0 00:05:34.705 EAL: Detected lcore 4 as core 4 on socket 0 00:05:34.705 EAL: Detected lcore 5 as core 5 on socket 0 00:05:34.705 EAL: Detected lcore 6 as core 6 on socket 0 00:05:34.705 EAL: Detected lcore 7 as core 8 on socket 0 00:05:34.705 EAL: Detected lcore 8 as core 9 on socket 0 00:05:34.705 EAL: Detected lcore 9 as core 10 on socket 0 00:05:34.705 EAL: Detected lcore 10 as core 11 on socket 0 00:05:34.705 EAL: Detected lcore 11 as core 12 on socket 0 00:05:34.705 EAL: Detected lcore 12 as core 13 on socket 0 00:05:34.705 EAL: Detected lcore 13 as core 14 on socket 0 00:05:34.705 EAL: Detected lcore 14 as core 16 on socket 0 00:05:34.705 EAL: Detected lcore 15 as core 17 on socket 0 00:05:34.705 EAL: Detected lcore 16 as core 18 on socket 0 00:05:34.705 EAL: Detected lcore 17 as core 19 on socket 0 00:05:34.705 EAL: Detected lcore 18 as core 20 on socket 0 00:05:34.705 EAL: Detected lcore 19 as core 21 on socket 0 00:05:34.705 EAL: Detected lcore 20 as core 22 on socket 0 00:05:34.705 EAL: Detected lcore 21 as core 24 on socket 0 00:05:34.705 EAL: Detected lcore 22 as core 25 on socket 0 00:05:34.706 EAL: Detected lcore 23 as core 26 on socket 0 00:05:34.706 EAL: Detected lcore 24 as core 27 on socket 0 00:05:34.706 EAL: Detected lcore 25 as core 28 on socket 0 00:05:34.706 EAL: Detected lcore 26 as core 29 on socket 0 00:05:34.706 EAL: Detected lcore 27 as core 30 on socket 0 00:05:34.706 EAL: Detected lcore 28 as core 0 on socket 1 00:05:34.706 EAL: Detected lcore 29 as core 1 on socket 1 00:05:34.706 EAL: Detected lcore 30 as core 2 on socket 1 00:05:34.706 EAL: Detected lcore 31 as core 3 on socket 1 00:05:34.706 EAL: Detected lcore 32 as core 4 on socket 1 00:05:34.706 EAL: Detected lcore 33 as core 5 on socket 1 00:05:34.706 EAL: Detected lcore 34 as core 6 on socket 1 00:05:34.706 EAL: Detected lcore 35 as core 8 on socket 1 00:05:34.706 EAL: Detected lcore 36 as core 9 on socket 1 00:05:34.706 EAL: Detected lcore 37 as core 10 on socket 1 00:05:34.706 EAL: Detected lcore 38 as core 11 on socket 1 00:05:34.706 EAL: Detected lcore 39 as core 12 on socket 1 00:05:34.706 EAL: Detected lcore 40 as core 13 on socket 1 00:05:34.706 EAL: Detected lcore 41 as core 14 on socket 1 00:05:34.706 EAL: Detected lcore 42 as core 16 on socket 1 00:05:34.706 EAL: Detected lcore 43 as core 17 on socket 1 00:05:34.706 EAL: Detected lcore 44 as core 18 on socket 1 00:05:34.706 EAL: Detected lcore 45 as core 19 on socket 1 00:05:34.706 EAL: Detected lcore 46 as core 20 on socket 1 00:05:34.706 EAL: Detected lcore 47 as core 21 on socket 1 00:05:34.706 EAL: Detected lcore 48 as core 22 on socket 1 00:05:34.706 EAL: Detected lcore 49 as core 24 on socket 1 00:05:34.706 EAL: Detected lcore 50 as core 25 on socket 1 00:05:34.706 EAL: Detected lcore 51 as core 26 on socket 1 00:05:34.706 EAL: Detected lcore 52 as core 27 on socket 1 
00:05:34.706 EAL: Detected lcore 53 as core 28 on socket 1 00:05:34.706 EAL: Detected lcore 54 as core 29 on socket 1 00:05:34.706 EAL: Detected lcore 55 as core 30 on socket 1 00:05:34.706 EAL: Detected lcore 56 as core 0 on socket 0 00:05:34.706 EAL: Detected lcore 57 as core 1 on socket 0 00:05:34.706 EAL: Detected lcore 58 as core 2 on socket 0 00:05:34.706 EAL: Detected lcore 59 as core 3 on socket 0 00:05:34.706 EAL: Detected lcore 60 as core 4 on socket 0 00:05:34.706 EAL: Detected lcore 61 as core 5 on socket 0 00:05:34.706 EAL: Detected lcore 62 as core 6 on socket 0 00:05:34.706 EAL: Detected lcore 63 as core 8 on socket 0 00:05:34.706 EAL: Detected lcore 64 as core 9 on socket 0 00:05:34.706 EAL: Detected lcore 65 as core 10 on socket 0 00:05:34.706 EAL: Detected lcore 66 as core 11 on socket 0 00:05:34.706 EAL: Detected lcore 67 as core 12 on socket 0 00:05:34.706 EAL: Detected lcore 68 as core 13 on socket 0 00:05:34.706 EAL: Detected lcore 69 as core 14 on socket 0 00:05:34.706 EAL: Detected lcore 70 as core 16 on socket 0 00:05:34.706 EAL: Detected lcore 71 as core 17 on socket 0 00:05:34.706 EAL: Detected lcore 72 as core 18 on socket 0 00:05:34.706 EAL: Detected lcore 73 as core 19 on socket 0 00:05:34.706 EAL: Detected lcore 74 as core 20 on socket 0 00:05:34.706 EAL: Detected lcore 75 as core 21 on socket 0 00:05:34.706 EAL: Detected lcore 76 as core 22 on socket 0 00:05:34.706 EAL: Detected lcore 77 as core 24 on socket 0 00:05:34.706 EAL: Detected lcore 78 as core 25 on socket 0 00:05:34.706 EAL: Detected lcore 79 as core 26 on socket 0 00:05:34.706 EAL: Detected lcore 80 as core 27 on socket 0 00:05:34.706 EAL: Detected lcore 81 as core 28 on socket 0 00:05:34.706 EAL: Detected lcore 82 as core 29 on socket 0 00:05:34.706 EAL: Detected lcore 83 as core 30 on socket 0 00:05:34.706 EAL: Detected lcore 84 as core 0 on socket 1 00:05:34.706 EAL: Detected lcore 85 as core 1 on socket 1 00:05:34.706 EAL: Detected lcore 86 as core 2 on socket 1 00:05:34.706 EAL: Detected lcore 87 as core 3 on socket 1 00:05:34.706 EAL: Detected lcore 88 as core 4 on socket 1 00:05:34.706 EAL: Detected lcore 89 as core 5 on socket 1 00:05:34.706 EAL: Detected lcore 90 as core 6 on socket 1 00:05:34.706 EAL: Detected lcore 91 as core 8 on socket 1 00:05:34.706 EAL: Detected lcore 92 as core 9 on socket 1 00:05:34.706 EAL: Detected lcore 93 as core 10 on socket 1 00:05:34.706 EAL: Detected lcore 94 as core 11 on socket 1 00:05:34.706 EAL: Detected lcore 95 as core 12 on socket 1 00:05:34.706 EAL: Detected lcore 96 as core 13 on socket 1 00:05:34.706 EAL: Detected lcore 97 as core 14 on socket 1 00:05:34.706 EAL: Detected lcore 98 as core 16 on socket 1 00:05:34.706 EAL: Detected lcore 99 as core 17 on socket 1 00:05:34.706 EAL: Detected lcore 100 as core 18 on socket 1 00:05:34.706 EAL: Detected lcore 101 as core 19 on socket 1 00:05:34.706 EAL: Detected lcore 102 as core 20 on socket 1 00:05:34.706 EAL: Detected lcore 103 as core 21 on socket 1 00:05:34.706 EAL: Detected lcore 104 as core 22 on socket 1 00:05:34.706 EAL: Detected lcore 105 as core 24 on socket 1 00:05:34.706 EAL: Detected lcore 106 as core 25 on socket 1 00:05:34.706 EAL: Detected lcore 107 as core 26 on socket 1 00:05:34.706 EAL: Detected lcore 108 as core 27 on socket 1 00:05:34.706 EAL: Detected lcore 109 as core 28 on socket 1 00:05:34.706 EAL: Detected lcore 110 as core 29 on socket 1 00:05:34.706 EAL: Detected lcore 111 as core 30 on socket 1 00:05:34.706 EAL: Maximum logical cores by configuration: 128 00:05:34.706 EAL: 
Detected CPU lcores: 112 00:05:34.706 EAL: Detected NUMA nodes: 2 00:05:34.706 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:34.706 EAL: Checking presence of .so 'librte_eal.so.23' 00:05:34.706 EAL: Checking presence of .so 'librte_eal.so' 00:05:34.706 EAL: Detected static linkage of DPDK 00:05:34.706 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.706 EAL: Bus pci wants IOVA as 'DC' 00:05:34.706 EAL: Buses did not request a specific IOVA mode. 00:05:34.706 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:34.706 EAL: Selected IOVA mode 'VA' 00:05:34.706 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.706 EAL: Probing VFIO support... 00:05:34.706 EAL: IOMMU type 1 (Type 1) is supported 00:05:34.706 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:34.706 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:34.706 EAL: VFIO support initialized 00:05:34.706 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.706 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.706 EAL: Setting up physically contiguous memory... 00:05:34.706 EAL: Setting maximum number of open files to 524288 00:05:34.706 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.706 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:34.706 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.706 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:34.706 EAL: Ask a 
virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:34.706 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.706 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:34.706 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.706 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.706 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:34.706 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:34.706 EAL: Hugepages will be freed exactly as allocated. 00:05:34.706 EAL: No shared files mode enabled, IPC is disabled 00:05:34.706 EAL: No shared files mode enabled, IPC is disabled 00:05:34.706 EAL: TSC frequency is ~2500000 KHz 00:05:34.706 EAL: Main lcore 0 is ready (tid=7f263a70da00;cpuset=[0]) 00:05:34.706 EAL: Trying to obtain current memory policy. 00:05:34.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.706 EAL: Restoring previous memory policy: 0 00:05:34.706 EAL: request: mp_malloc_sync 00:05:34.706 EAL: No shared files mode enabled, IPC is disabled 00:05:34.706 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.706 EAL: No shared files mode enabled, IPC is disabled 00:05:34.706 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.706 00:05:34.707 00:05:34.707 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.707 http://cunit.sourceforge.net/ 00:05:34.707 00:05:34.707 00:05:34.707 Suite: components_suite 00:05:34.707 Test: vtophys_malloc_test ...passed 00:05:34.707 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.707 EAL: Trying to obtain current memory policy. 00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.707 EAL: Trying to obtain current memory policy. 
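The EAL output above reflects a statically linked DPDK selecting IOVA 'VA' mode and reserving memseg virtual areas starting at 0x200000000000. A minimal sketch of how an SPDK program would ask for that layout, assuming the public spdk_env_opts fields (name, core_mask, base_virtaddr); the application name is illustrative and error handling is trimmed:

#include "spdk/env.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "env_example";            /* illustrative app name, not the test's */
    opts.core_mask = "0x1";               /* single core, as in the tests above */
    opts.base_virtaddr = 0x200000000000;  /* matches the memseg VAs in the log */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    printf("env ready on %u core(s)\n", spdk_env_get_core_count());
    spdk_env_fini();
    return 0;
}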
00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.707 EAL: Trying to obtain current memory policy. 00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.707 EAL: Trying to obtain current memory policy. 00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.707 EAL: Trying to obtain current memory policy. 00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.707 EAL: Trying to obtain current memory policy. 00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 130MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was shrunk by 130MB 00:05:34.707 EAL: Trying to obtain current memory policy. 
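The repeating "Heap on socket 0 was expanded/shrunk by N MB" entries above come from vtophys_spdk_malloc_test allocating progressively larger DMA-safe buffers and freeing them again; each growth or shrink fires the registered 'spdk:' mem event callback. A minimal sketch of that allocation pattern, assuming spdk_env_init() has already run (the helper name and sizes are illustrative, not the test's own code):

#include "spdk/env.h"
#include <stdio.h>

static void exercise_heap(size_t len)
{
    uint64_t size = len;
    /* 2 MB alignment, no immediate physical-address lookup */
    void *buf = spdk_dma_malloc(len, 0x200000, NULL);

    if (buf == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed\n", len);
        return;
    }

    /* the test additionally checks that every buffer has a valid translation */
    if (spdk_vtophys(buf, &size) == SPDK_VTOPHYS_ERROR) {
        fprintf(stderr, "no IOVA translation for %p\n", buf);
    }

    spdk_dma_free(buf);   /* lets the heap shrink again */
}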
00:05:34.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.707 EAL: Restoring previous memory policy: 4 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.707 EAL: request: mp_malloc_sync 00:05:34.707 EAL: No shared files mode enabled, IPC is disabled 00:05:34.707 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.966 EAL: request: mp_malloc_sync 00:05:34.966 EAL: No shared files mode enabled, IPC is disabled 00:05:34.966 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.966 EAL: Trying to obtain current memory policy. 00:05:34.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.966 EAL: Restoring previous memory policy: 4 00:05:34.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.966 EAL: request: mp_malloc_sync 00:05:34.966 EAL: No shared files mode enabled, IPC is disabled 00:05:34.966 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.225 EAL: request: mp_malloc_sync 00:05:35.225 EAL: No shared files mode enabled, IPC is disabled 00:05:35.226 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.226 EAL: Trying to obtain current memory policy. 00:05:35.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.226 EAL: Restoring previous memory policy: 4 00:05:35.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.226 EAL: request: mp_malloc_sync 00:05:35.226 EAL: No shared files mode enabled, IPC is disabled 00:05:35.226 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.745 EAL: request: mp_malloc_sync 00:05:35.745 EAL: No shared files mode enabled, IPC is disabled 00:05:35.745 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.745 passed 00:05:35.745 00:05:35.745 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.745 suites 1 1 n/a 0 0 00:05:35.745 tests 2 2 2 0 0 00:05:35.745 asserts 497 497 497 0 n/a 00:05:35.745 00:05:35.745 Elapsed time = 0.961 seconds 00:05:35.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.745 EAL: request: mp_malloc_sync 00:05:35.745 EAL: No shared files mode enabled, IPC is disabled 00:05:35.745 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.745 EAL: No shared files mode enabled, IPC is disabled 00:05:35.745 EAL: No shared files mode enabled, IPC is disabled 00:05:35.745 EAL: No shared files mode enabled, IPC is disabled 00:05:35.745 00:05:35.745 real 0m1.075s 00:05:35.745 user 0m0.627s 00:05:35.745 sys 0m0.425s 00:05:35.745 21:06:32 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.745 21:06:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 END TEST env_vtophys 00:05:35.745 ************************************ 00:05:35.745 21:06:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.745 21:06:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.745 21:06:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.745 21:06:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 START TEST env_pci 00:05:35.745 ************************************ 00:05:35.745 21:06:32 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.745 00:05:35.745 00:05:35.745 CUnit - A unit testing 
framework for C - Version 2.1-3 00:05:35.745 http://cunit.sourceforge.net/ 00:05:35.745 00:05:35.745 00:05:35.745 Suite: pci 00:05:35.745 Test: pci_hook ...[2024-07-14 21:06:32.497811] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3997132 has claimed it 00:05:35.745 EAL: Cannot find device (10000:00:01.0) 00:05:35.745 EAL: Failed to attach device on primary process 00:05:35.745 passed 00:05:35.745 00:05:35.745 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.745 suites 1 1 n/a 0 0 00:05:35.745 tests 1 1 1 0 0 00:05:35.745 asserts 25 25 25 0 n/a 00:05:35.745 00:05:35.745 Elapsed time = 0.038 seconds 00:05:35.745 00:05:35.745 real 0m0.055s 00:05:35.745 user 0m0.017s 00:05:35.745 sys 0m0.038s 00:05:35.745 21:06:32 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.745 21:06:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 END TEST env_pci 00:05:35.745 ************************************ 00:05:35.745 21:06:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.745 21:06:32 env -- env/env.sh@15 -- # uname 00:05:35.745 21:06:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.745 21:06:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.745 21:06:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.745 21:06:32 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:35.745 21:06:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.745 21:06:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 START TEST env_dpdk_post_init 00:05:35.745 ************************************ 00:05:35.745 21:06:32 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.745 EAL: Detected CPU lcores: 112 00:05:35.745 EAL: Detected NUMA nodes: 2 00:05:35.745 EAL: Detected static linkage of DPDK 00:05:35.745 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.005 EAL: Selected IOVA mode 'VA' 00:05:36.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.005 EAL: VFIO support initialized 00:05:36.005 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.005 EAL: Using IOMMU type 1 (Type 1) 00:05:36.941 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:40.228 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:40.228 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:40.486 Starting DPDK initialization... 00:05:40.486 Starting SPDK post initialization... 00:05:40.486 SPDK NVMe probe 00:05:40.486 Attaching to 0000:d8:00.0 00:05:40.486 Attached to 0000:d8:00.0 00:05:40.486 Cleaning up... 
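The env_dpdk_post_init output above ("SPDK NVMe probe ... Attaching to 0000:d8:00.0 ... Cleaning up...") corresponds to a PCIe enumeration over the public spdk_nvme API. A hedged sketch of that probe/attach/detach flow; the callback and function names are illustrative, not the test's own code:

#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;   /* accept every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
    spdk_nvme_detach(ctrlr);   /* "Cleaning up..." */
}

int probe_all_nvme(void)
{
    /* NULL trid: enumerate all PCIe NVMe devices bound to a userspace driver */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}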
00:05:40.486 00:05:40.486 real 0m4.696s 00:05:40.486 user 0m3.501s 00:05:40.486 sys 0m0.440s 00:05:40.486 21:06:37 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.486 21:06:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.486 ************************************ 00:05:40.486 END TEST env_dpdk_post_init 00:05:40.486 ************************************ 00:05:40.486 21:06:37 env -- env/env.sh@26 -- # uname 00:05:40.486 21:06:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.486 21:06:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.486 21:06:37 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.486 21:06:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.486 21:06:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.745 ************************************ 00:05:40.745 START TEST env_mem_callbacks 00:05:40.745 ************************************ 00:05:40.745 21:06:37 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.745 EAL: Detected CPU lcores: 112 00:05:40.745 EAL: Detected NUMA nodes: 2 00:05:40.745 EAL: Detected static linkage of DPDK 00:05:40.745 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.745 EAL: Selected IOVA mode 'VA' 00:05:40.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.745 EAL: VFIO support initialized 00:05:40.745 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.745 00:05:40.745 00:05:40.745 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.746 http://cunit.sourceforge.net/ 00:05:40.746 00:05:40.746 00:05:40.746 Suite: memory 00:05:40.746 Test: test ... 
00:05:40.746 register 0x200000200000 2097152 00:05:40.746 malloc 3145728 00:05:40.746 register 0x200000400000 4194304 00:05:40.746 buf 0x200000500000 len 3145728 PASSED 00:05:40.746 malloc 64 00:05:40.746 buf 0x2000004fff40 len 64 PASSED 00:05:40.746 malloc 4194304 00:05:40.746 register 0x200000800000 6291456 00:05:40.746 buf 0x200000a00000 len 4194304 PASSED 00:05:40.746 free 0x200000500000 3145728 00:05:40.746 free 0x2000004fff40 64 00:05:40.746 unregister 0x200000400000 4194304 PASSED 00:05:40.746 free 0x200000a00000 4194304 00:05:40.746 unregister 0x200000800000 6291456 PASSED 00:05:40.746 malloc 8388608 00:05:40.746 register 0x200000400000 10485760 00:05:40.746 buf 0x200000600000 len 8388608 PASSED 00:05:40.746 free 0x200000600000 8388608 00:05:40.746 unregister 0x200000400000 10485760 PASSED 00:05:40.746 passed 00:05:40.746 00:05:40.746 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.746 suites 1 1 n/a 0 0 00:05:40.746 tests 1 1 1 0 0 00:05:40.746 asserts 15 15 15 0 n/a 00:05:40.746 00:05:40.746 Elapsed time = 0.005 seconds 00:05:40.746 00:05:40.746 real 0m0.060s 00:05:40.746 user 0m0.012s 00:05:40.746 sys 0m0.048s 00:05:40.746 21:06:37 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.746 21:06:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:40.746 ************************************ 00:05:40.746 END TEST env_mem_callbacks 00:05:40.746 ************************************ 00:05:40.746 00:05:40.746 real 0m6.451s 00:05:40.746 user 0m4.399s 00:05:40.746 sys 0m1.298s 00:05:40.746 21:06:37 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.746 21:06:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.746 ************************************ 00:05:40.746 END TEST env 00:05:40.746 ************************************ 00:05:40.746 21:06:37 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.746 21:06:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.746 21:06:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.746 21:06:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.746 ************************************ 00:05:40.746 START TEST rpc 00:05:40.746 ************************************ 00:05:40.746 21:06:37 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.005 * Looking for test storage... 00:05:41.005 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:41.005 21:06:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3998291 00:05:41.005 21:06:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.005 21:06:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.005 21:06:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3998291 00:05:41.005 21:06:37 rpc -- common/autotest_common.sh@827 -- # '[' -z 3998291 ']' 00:05:41.005 21:06:37 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.005 21:06:37 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.005 21:06:37 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
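The register/unregister lines printed by the mem_callbacks test above are emitted from a memory-notify callback that fires whenever a region is added to or removed from the SPDK memory map. A small sketch of what triggers such callbacks, assuming a 2 MB-aligned, externally allocated buffer; the addresses and sizes are illustrative, not the test's own values:

#include "spdk/env.h"
#include <stdio.h>
#include <stdlib.h>

#define REGION_SIZE (2 * 1024 * 1024)

int register_external_buffer(void)
{
    void *buf = NULL;

    /* externally allocated, 2 MB-aligned region (not from spdk_dma_malloc) */
    if (posix_memalign(&buf, REGION_SIZE, REGION_SIZE) != 0) {
        return -1;
    }

    if (spdk_mem_register(buf, REGION_SIZE) != 0) {   /* notify callbacks observe "register ..." */
        free(buf);
        return -1;
    }

    /* ... the buffer may now be used for DMA by SPDK drivers ... */

    spdk_mem_unregister(buf, REGION_SIZE);            /* notify callbacks observe "unregister ..." */
    free(buf);
    return 0;
}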
00:05:41.005 21:06:37 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.005 21:06:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.005 [2024-07-14 21:06:37.704494] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:41.005 [2024-07-14 21:06:37.704584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998291 ] 00:05:41.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.005 [2024-07-14 21:06:37.773645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.005 [2024-07-14 21:06:37.812567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.005 [2024-07-14 21:06:37.812607] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3998291' to capture a snapshot of events at runtime. 00:05:41.005 [2024-07-14 21:06:37.812617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.005 [2024-07-14 21:06:37.812625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.005 [2024-07-14 21:06:37.812633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3998291 for offline analysis/debug. 00:05:41.005 [2024-07-14 21:06:37.812654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.264 21:06:37 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.264 21:06:37 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:41.265 21:06:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:41.265 21:06:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:41.265 21:06:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.265 21:06:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.265 21:06:37 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.265 21:06:37 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.265 21:06:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.265 ************************************ 00:05:41.265 START TEST rpc_integrity 00:05:41.265 ************************************ 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.265 21:06:38 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.265 { 00:05:41.265 "name": "Malloc0", 00:05:41.265 "aliases": [ 00:05:41.265 "54d35a8a-ba45-4eca-9470-08168c52eabf" 00:05:41.265 ], 00:05:41.265 "product_name": "Malloc disk", 00:05:41.265 "block_size": 512, 00:05:41.265 "num_blocks": 16384, 00:05:41.265 "uuid": "54d35a8a-ba45-4eca-9470-08168c52eabf", 00:05:41.265 "assigned_rate_limits": { 00:05:41.265 "rw_ios_per_sec": 0, 00:05:41.265 "rw_mbytes_per_sec": 0, 00:05:41.265 "r_mbytes_per_sec": 0, 00:05:41.265 "w_mbytes_per_sec": 0 00:05:41.265 }, 00:05:41.265 "claimed": false, 00:05:41.265 "zoned": false, 00:05:41.265 "supported_io_types": { 00:05:41.265 "read": true, 00:05:41.265 "write": true, 00:05:41.265 "unmap": true, 00:05:41.265 "write_zeroes": true, 00:05:41.265 "flush": true, 00:05:41.265 "reset": true, 00:05:41.265 "compare": false, 00:05:41.265 "compare_and_write": false, 00:05:41.265 "abort": true, 00:05:41.265 "nvme_admin": false, 00:05:41.265 "nvme_io": false 00:05:41.265 }, 00:05:41.265 "memory_domains": [ 00:05:41.265 { 00:05:41.265 "dma_device_id": "system", 00:05:41.265 "dma_device_type": 1 00:05:41.265 }, 00:05:41.265 { 00:05:41.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.265 "dma_device_type": 2 00:05:41.265 } 00:05:41.265 ], 00:05:41.265 "driver_specific": {} 00:05:41.265 } 00:05:41.265 ]' 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.265 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.265 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.265 [2024-07-14 21:06:38.166018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.265 [2024-07-14 21:06:38.166052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.265 [2024-07-14 21:06:38.166068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x53f1cb0 00:05:41.265 [2024-07-14 21:06:38.166078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.265 [2024-07-14 21:06:38.166892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.265 [2024-07-14 21:06:38.166915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.524 Passthru0 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.524 { 00:05:41.524 "name": "Malloc0", 00:05:41.524 "aliases": [ 00:05:41.524 "54d35a8a-ba45-4eca-9470-08168c52eabf" 00:05:41.524 ], 00:05:41.524 "product_name": "Malloc disk", 00:05:41.524 "block_size": 512, 00:05:41.524 "num_blocks": 16384, 00:05:41.524 "uuid": "54d35a8a-ba45-4eca-9470-08168c52eabf", 00:05:41.524 "assigned_rate_limits": { 00:05:41.524 "rw_ios_per_sec": 0, 00:05:41.524 "rw_mbytes_per_sec": 0, 00:05:41.524 "r_mbytes_per_sec": 0, 00:05:41.524 "w_mbytes_per_sec": 0 00:05:41.524 }, 00:05:41.524 "claimed": true, 00:05:41.524 "claim_type": "exclusive_write", 00:05:41.524 "zoned": false, 00:05:41.524 "supported_io_types": { 00:05:41.524 "read": true, 00:05:41.524 "write": true, 00:05:41.524 "unmap": true, 00:05:41.524 "write_zeroes": true, 00:05:41.524 "flush": true, 00:05:41.524 "reset": true, 00:05:41.524 "compare": false, 00:05:41.524 "compare_and_write": false, 00:05:41.524 "abort": true, 00:05:41.524 "nvme_admin": false, 00:05:41.524 "nvme_io": false 00:05:41.524 }, 00:05:41.524 "memory_domains": [ 00:05:41.524 { 00:05:41.524 "dma_device_id": "system", 00:05:41.524 "dma_device_type": 1 00:05:41.524 }, 00:05:41.524 { 00:05:41.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.524 "dma_device_type": 2 00:05:41.524 } 00:05:41.524 ], 00:05:41.524 "driver_specific": {} 00:05:41.524 }, 00:05:41.524 { 00:05:41.524 "name": "Passthru0", 00:05:41.524 "aliases": [ 00:05:41.524 "501aada3-dbd5-5191-b81e-7b984573ec2b" 00:05:41.524 ], 00:05:41.524 "product_name": "passthru", 00:05:41.524 "block_size": 512, 00:05:41.524 "num_blocks": 16384, 00:05:41.524 "uuid": "501aada3-dbd5-5191-b81e-7b984573ec2b", 00:05:41.524 "assigned_rate_limits": { 00:05:41.524 "rw_ios_per_sec": 0, 00:05:41.524 "rw_mbytes_per_sec": 0, 00:05:41.524 "r_mbytes_per_sec": 0, 00:05:41.524 "w_mbytes_per_sec": 0 00:05:41.524 }, 00:05:41.524 "claimed": false, 00:05:41.524 "zoned": false, 00:05:41.524 "supported_io_types": { 00:05:41.524 "read": true, 00:05:41.524 "write": true, 00:05:41.524 "unmap": true, 00:05:41.524 "write_zeroes": true, 00:05:41.524 "flush": true, 00:05:41.524 "reset": true, 00:05:41.524 "compare": false, 00:05:41.524 "compare_and_write": false, 00:05:41.524 "abort": true, 00:05:41.524 "nvme_admin": false, 00:05:41.524 "nvme_io": false 00:05:41.524 }, 00:05:41.524 "memory_domains": [ 00:05:41.524 { 00:05:41.524 "dma_device_id": "system", 00:05:41.524 "dma_device_type": 1 00:05:41.524 }, 00:05:41.524 { 00:05:41.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.524 "dma_device_type": 2 00:05:41.524 } 00:05:41.524 ], 00:05:41.524 "driver_specific": { 00:05:41.524 "passthru": { 00:05:41.524 "name": "Passthru0", 00:05:41.524 "base_bdev_name": "Malloc0" 00:05:41.524 } 00:05:41.524 } 00:05:41.524 } 00:05:41.524 ]' 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.524 21:06:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.524 00:05:41.524 real 0m0.286s 00:05:41.524 user 0m0.176s 00:05:41.524 sys 0m0.049s 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.524 ************************************ 00:05:41.524 END TEST rpc_integrity 00:05:41.524 ************************************ 00:05:41.524 21:06:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.524 21:06:38 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.524 21:06:38 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.524 21:06:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.524 ************************************ 00:05:41.524 START TEST rpc_plugins 00:05:41.524 ************************************ 00:05:41.524 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:41.524 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.524 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.524 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.524 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.524 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.524 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.524 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.783 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.783 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.783 { 00:05:41.783 "name": "Malloc1", 00:05:41.783 "aliases": [ 00:05:41.783 "59f9831b-0c2d-4528-b4c4-5c26681fb9d8" 00:05:41.783 ], 00:05:41.783 "product_name": "Malloc disk", 00:05:41.783 "block_size": 4096, 00:05:41.783 "num_blocks": 256, 00:05:41.783 "uuid": "59f9831b-0c2d-4528-b4c4-5c26681fb9d8", 00:05:41.783 "assigned_rate_limits": { 00:05:41.783 "rw_ios_per_sec": 0, 00:05:41.783 "rw_mbytes_per_sec": 0, 00:05:41.783 "r_mbytes_per_sec": 0, 00:05:41.783 "w_mbytes_per_sec": 0 00:05:41.783 }, 00:05:41.783 "claimed": false, 00:05:41.783 "zoned": false, 00:05:41.783 "supported_io_types": { 00:05:41.783 "read": true, 00:05:41.783 "write": true, 00:05:41.783 "unmap": true, 00:05:41.783 "write_zeroes": true, 
00:05:41.783 "flush": true, 00:05:41.783 "reset": true, 00:05:41.784 "compare": false, 00:05:41.784 "compare_and_write": false, 00:05:41.784 "abort": true, 00:05:41.784 "nvme_admin": false, 00:05:41.784 "nvme_io": false 00:05:41.784 }, 00:05:41.784 "memory_domains": [ 00:05:41.784 { 00:05:41.784 "dma_device_id": "system", 00:05:41.784 "dma_device_type": 1 00:05:41.784 }, 00:05:41.784 { 00:05:41.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.784 "dma_device_type": 2 00:05:41.784 } 00:05:41.784 ], 00:05:41.784 "driver_specific": {} 00:05:41.784 } 00:05:41.784 ]' 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.784 21:06:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.784 00:05:41.784 real 0m0.145s 00:05:41.784 user 0m0.087s 00:05:41.784 sys 0m0.025s 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.784 21:06:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 ************************************ 00:05:41.784 END TEST rpc_plugins 00:05:41.784 ************************************ 00:05:41.784 21:06:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.784 21:06:38 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.784 21:06:38 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.784 21:06:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 ************************************ 00:05:41.784 START TEST rpc_trace_cmd_test 00:05:41.784 ************************************ 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.784 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3998291", 00:05:41.784 "tpoint_group_mask": "0x8", 00:05:41.784 "iscsi_conn": { 00:05:41.784 "mask": "0x2", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "scsi": { 00:05:41.784 "mask": "0x4", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "bdev": { 00:05:41.784 "mask": "0x8", 00:05:41.784 "tpoint_mask": 
"0xffffffffffffffff" 00:05:41.784 }, 00:05:41.784 "nvmf_rdma": { 00:05:41.784 "mask": "0x10", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "nvmf_tcp": { 00:05:41.784 "mask": "0x20", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "ftl": { 00:05:41.784 "mask": "0x40", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "blobfs": { 00:05:41.784 "mask": "0x80", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "dsa": { 00:05:41.784 "mask": "0x200", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "thread": { 00:05:41.784 "mask": "0x400", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "nvme_pcie": { 00:05:41.784 "mask": "0x800", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "iaa": { 00:05:41.784 "mask": "0x1000", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "nvme_tcp": { 00:05:41.784 "mask": "0x2000", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "bdev_nvme": { 00:05:41.784 "mask": "0x4000", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 }, 00:05:41.784 "sock": { 00:05:41.784 "mask": "0x8000", 00:05:41.784 "tpoint_mask": "0x0" 00:05:41.784 } 00:05:41.784 }' 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:41.784 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.042 00:05:42.042 real 0m0.229s 00:05:42.042 user 0m0.188s 00:05:42.042 sys 0m0.032s 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.042 21:06:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.042 ************************************ 00:05:42.042 END TEST rpc_trace_cmd_test 00:05:42.042 ************************************ 00:05:42.042 21:06:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.042 21:06:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.042 21:06:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.042 21:06:38 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.042 21:06:38 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.042 21:06:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.042 ************************************ 00:05:42.043 START TEST rpc_daemon_integrity 00:05:42.043 ************************************ 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.043 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.302 { 00:05:42.302 "name": "Malloc2", 00:05:42.302 "aliases": [ 00:05:42.302 "93f1abe6-16f1-4698-be1f-7ce428aa1fb1" 00:05:42.302 ], 00:05:42.302 "product_name": "Malloc disk", 00:05:42.302 "block_size": 512, 00:05:42.302 "num_blocks": 16384, 00:05:42.302 "uuid": "93f1abe6-16f1-4698-be1f-7ce428aa1fb1", 00:05:42.302 "assigned_rate_limits": { 00:05:42.302 "rw_ios_per_sec": 0, 00:05:42.302 "rw_mbytes_per_sec": 0, 00:05:42.302 "r_mbytes_per_sec": 0, 00:05:42.302 "w_mbytes_per_sec": 0 00:05:42.302 }, 00:05:42.302 "claimed": false, 00:05:42.302 "zoned": false, 00:05:42.302 "supported_io_types": { 00:05:42.302 "read": true, 00:05:42.302 "write": true, 00:05:42.302 "unmap": true, 00:05:42.302 "write_zeroes": true, 00:05:42.302 "flush": true, 00:05:42.302 "reset": true, 00:05:42.302 "compare": false, 00:05:42.302 "compare_and_write": false, 00:05:42.302 "abort": true, 00:05:42.302 "nvme_admin": false, 00:05:42.302 "nvme_io": false 00:05:42.302 }, 00:05:42.302 "memory_domains": [ 00:05:42.302 { 00:05:42.302 "dma_device_id": "system", 00:05:42.302 "dma_device_type": 1 00:05:42.302 }, 00:05:42.302 { 00:05:42.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.302 "dma_device_type": 2 00:05:42.302 } 00:05:42.302 ], 00:05:42.302 "driver_specific": {} 00:05:42.302 } 00:05:42.302 ]' 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 [2024-07-14 21:06:39.056301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.302 [2024-07-14 21:06:39.056332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.302 [2024-07-14 21:06:39.056349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x53e3570 00:05:42.302 [2024-07-14 21:06:39.056362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.302 [2024-07-14 21:06:39.057051] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.302 [2024-07-14 21:06:39.057074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.302 Passthru0 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.302 { 00:05:42.302 "name": "Malloc2", 00:05:42.302 "aliases": [ 00:05:42.302 "93f1abe6-16f1-4698-be1f-7ce428aa1fb1" 00:05:42.302 ], 00:05:42.302 "product_name": "Malloc disk", 00:05:42.302 "block_size": 512, 00:05:42.302 "num_blocks": 16384, 00:05:42.302 "uuid": "93f1abe6-16f1-4698-be1f-7ce428aa1fb1", 00:05:42.302 "assigned_rate_limits": { 00:05:42.302 "rw_ios_per_sec": 0, 00:05:42.302 "rw_mbytes_per_sec": 0, 00:05:42.302 "r_mbytes_per_sec": 0, 00:05:42.302 "w_mbytes_per_sec": 0 00:05:42.302 }, 00:05:42.302 "claimed": true, 00:05:42.302 "claim_type": "exclusive_write", 00:05:42.302 "zoned": false, 00:05:42.302 "supported_io_types": { 00:05:42.302 "read": true, 00:05:42.302 "write": true, 00:05:42.302 "unmap": true, 00:05:42.302 "write_zeroes": true, 00:05:42.302 "flush": true, 00:05:42.302 "reset": true, 00:05:42.302 "compare": false, 00:05:42.302 "compare_and_write": false, 00:05:42.302 "abort": true, 00:05:42.302 "nvme_admin": false, 00:05:42.302 "nvme_io": false 00:05:42.302 }, 00:05:42.302 "memory_domains": [ 00:05:42.302 { 00:05:42.302 "dma_device_id": "system", 00:05:42.302 "dma_device_type": 1 00:05:42.302 }, 00:05:42.302 { 00:05:42.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.302 "dma_device_type": 2 00:05:42.302 } 00:05:42.302 ], 00:05:42.302 "driver_specific": {} 00:05:42.302 }, 00:05:42.302 { 00:05:42.302 "name": "Passthru0", 00:05:42.302 "aliases": [ 00:05:42.302 "c9911247-797c-54f1-8a56-560b88da0e3a" 00:05:42.302 ], 00:05:42.302 "product_name": "passthru", 00:05:42.302 "block_size": 512, 00:05:42.302 "num_blocks": 16384, 00:05:42.302 "uuid": "c9911247-797c-54f1-8a56-560b88da0e3a", 00:05:42.302 "assigned_rate_limits": { 00:05:42.302 "rw_ios_per_sec": 0, 00:05:42.302 "rw_mbytes_per_sec": 0, 00:05:42.302 "r_mbytes_per_sec": 0, 00:05:42.302 "w_mbytes_per_sec": 0 00:05:42.302 }, 00:05:42.302 "claimed": false, 00:05:42.302 "zoned": false, 00:05:42.302 "supported_io_types": { 00:05:42.302 "read": true, 00:05:42.302 "write": true, 00:05:42.302 "unmap": true, 00:05:42.302 "write_zeroes": true, 00:05:42.302 "flush": true, 00:05:42.302 "reset": true, 00:05:42.302 "compare": false, 00:05:42.302 "compare_and_write": false, 00:05:42.302 "abort": true, 00:05:42.302 "nvme_admin": false, 00:05:42.302 "nvme_io": false 00:05:42.302 }, 00:05:42.302 "memory_domains": [ 00:05:42.302 { 00:05:42.302 "dma_device_id": "system", 00:05:42.302 "dma_device_type": 1 00:05:42.302 }, 00:05:42.302 { 00:05:42.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.302 "dma_device_type": 2 00:05:42.302 } 00:05:42.302 ], 00:05:42.302 "driver_specific": { 00:05:42.302 "passthru": { 00:05:42.302 "name": "Passthru0", 00:05:42.302 "base_bdev_name": "Malloc2" 00:05:42.302 } 00:05:42.302 } 00:05:42.302 } 00:05:42.302 ]' 00:05:42.302 21:06:39 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.302 00:05:42.302 real 0m0.276s 00:05:42.302 user 0m0.165s 00:05:42.302 sys 0m0.046s 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.302 21:06:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 ************************************ 00:05:42.302 END TEST rpc_daemon_integrity 00:05:42.302 ************************************ 00:05:42.562 21:06:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.562 21:06:39 rpc -- rpc/rpc.sh@84 -- # killprocess 3998291 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@946 -- # '[' -z 3998291 ']' 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@950 -- # kill -0 3998291 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@951 -- # uname 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3998291 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3998291' 00:05:42.562 killing process with pid 3998291 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@965 -- # kill 3998291 00:05:42.562 21:06:39 rpc -- common/autotest_common.sh@970 -- # wait 3998291 00:05:42.821 00:05:42.821 real 0m2.012s 00:05:42.821 user 0m2.560s 00:05:42.821 sys 0m0.770s 00:05:42.821 21:06:39 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.821 21:06:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.821 ************************************ 00:05:42.821 END TEST rpc 00:05:42.821 ************************************ 00:05:42.821 21:06:39 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.821 21:06:39 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.821 21:06:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.821 21:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:42.821 ************************************ 00:05:42.821 START TEST skip_rpc 00:05:42.821 ************************************ 00:05:42.821 21:06:39 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.080 * Looking for test storage... 00:05:43.080 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:43.080 21:06:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:43.080 21:06:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:43.080 21:06:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.080 21:06:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.080 21:06:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.080 21:06:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.080 ************************************ 00:05:43.080 START TEST skip_rpc 00:05:43.080 ************************************ 00:05:43.080 21:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:43.080 21:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.080 21:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3998741 00:05:43.080 21:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.080 21:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.080 [2024-07-14 21:06:39.798106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:43.080 [2024-07-14 21:06:39.798176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998741 ] 00:05:43.080 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.080 [2024-07-14 21:06:39.863753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.080 [2024-07-14 21:06:39.900870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:48.353 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3998741 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3998741 ']' 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3998741 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3998741 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3998741' 00:05:48.354 killing process with pid 3998741 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3998741 00:05:48.354 21:06:44 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3998741 00:05:48.354 00:05:48.354 real 0m5.360s 00:05:48.354 user 0m5.127s 00:05:48.354 sys 0m0.274s 00:05:48.354 21:06:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.354 21:06:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 ************************************ 00:05:48.354 END TEST skip_rpc 
00:05:48.354 ************************************ 00:05:48.354 21:06:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.354 21:06:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.354 21:06:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.354 21:06:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 ************************************ 00:05:48.354 START TEST skip_rpc_with_json 00:05:48.354 ************************************ 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3999815 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3999815 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3999815 ']' 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.354 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 [2024-07-14 21:06:45.248074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:48.354 [2024-07-14 21:06:45.248156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999815 ] 00:05:48.614 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.614 [2024-07-14 21:06:45.315085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.614 [2024-07-14 21:06:45.351228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.874 [2024-07-14 21:06:45.537032] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.874 request: 00:05:48.874 { 00:05:48.874 "trtype": "tcp", 00:05:48.874 "method": "nvmf_get_transports", 00:05:48.874 "req_id": 1 00:05:48.874 } 00:05:48.874 Got JSON-RPC error response 00:05:48.874 response: 00:05:48.874 { 00:05:48.874 "code": -19, 00:05:48.874 "message": "No such device" 00:05:48.874 } 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.874 [2024-07-14 21:06:45.545133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.874 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:48.874 { 00:05:48.874 "subsystems": [ 00:05:48.874 { 00:05:48.874 "subsystem": "scheduler", 00:05:48.874 "config": [ 00:05:48.874 { 00:05:48.874 "method": "framework_set_scheduler", 00:05:48.874 "params": { 00:05:48.874 "name": "static" 00:05:48.874 } 00:05:48.874 } 00:05:48.874 ] 00:05:48.874 }, 00:05:48.874 { 00:05:48.874 "subsystem": "vmd", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "sock", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "sock_set_default_impl", 00:05:48.875 "params": { 00:05:48.875 "impl_name": "posix" 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "sock_impl_set_options", 00:05:48.875 "params": { 00:05:48.875 "impl_name": "ssl", 00:05:48.875 "recv_buf_size": 4096, 00:05:48.875 "send_buf_size": 4096, 00:05:48.875 "enable_recv_pipe": true, 00:05:48.875 "enable_quickack": 
false, 00:05:48.875 "enable_placement_id": 0, 00:05:48.875 "enable_zerocopy_send_server": true, 00:05:48.875 "enable_zerocopy_send_client": false, 00:05:48.875 "zerocopy_threshold": 0, 00:05:48.875 "tls_version": 0, 00:05:48.875 "enable_ktls": false 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "sock_impl_set_options", 00:05:48.875 "params": { 00:05:48.875 "impl_name": "posix", 00:05:48.875 "recv_buf_size": 2097152, 00:05:48.875 "send_buf_size": 2097152, 00:05:48.875 "enable_recv_pipe": true, 00:05:48.875 "enable_quickack": false, 00:05:48.875 "enable_placement_id": 0, 00:05:48.875 "enable_zerocopy_send_server": true, 00:05:48.875 "enable_zerocopy_send_client": false, 00:05:48.875 "zerocopy_threshold": 0, 00:05:48.875 "tls_version": 0, 00:05:48.875 "enable_ktls": false 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "iobuf", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "iobuf_set_options", 00:05:48.875 "params": { 00:05:48.875 "small_pool_count": 8192, 00:05:48.875 "large_pool_count": 1024, 00:05:48.875 "small_bufsize": 8192, 00:05:48.875 "large_bufsize": 135168 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "keyring", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "vfio_user_target", 00:05:48.875 "config": null 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "accel", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "accel_set_options", 00:05:48.875 "params": { 00:05:48.875 "small_cache_size": 128, 00:05:48.875 "large_cache_size": 16, 00:05:48.875 "task_count": 2048, 00:05:48.875 "sequence_count": 2048, 00:05:48.875 "buf_count": 2048 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "bdev", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "bdev_set_options", 00:05:48.875 "params": { 00:05:48.875 "bdev_io_pool_size": 65535, 00:05:48.875 "bdev_io_cache_size": 256, 00:05:48.875 "bdev_auto_examine": true, 00:05:48.875 "iobuf_small_cache_size": 128, 00:05:48.875 "iobuf_large_cache_size": 16 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_raid_set_options", 00:05:48.875 "params": { 00:05:48.875 "process_window_size_kb": 1024 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_nvme_set_options", 00:05:48.875 "params": { 00:05:48.875 "action_on_timeout": "none", 00:05:48.875 "timeout_us": 0, 00:05:48.875 "timeout_admin_us": 0, 00:05:48.875 "keep_alive_timeout_ms": 10000, 00:05:48.875 "arbitration_burst": 0, 00:05:48.875 "low_priority_weight": 0, 00:05:48.875 "medium_priority_weight": 0, 00:05:48.875 "high_priority_weight": 0, 00:05:48.875 "nvme_adminq_poll_period_us": 10000, 00:05:48.875 "nvme_ioq_poll_period_us": 0, 00:05:48.875 "io_queue_requests": 0, 00:05:48.875 "delay_cmd_submit": true, 00:05:48.875 "transport_retry_count": 4, 00:05:48.875 "bdev_retry_count": 3, 00:05:48.875 "transport_ack_timeout": 0, 00:05:48.875 "ctrlr_loss_timeout_sec": 0, 00:05:48.875 "reconnect_delay_sec": 0, 00:05:48.875 "fast_io_fail_timeout_sec": 0, 00:05:48.875 "disable_auto_failback": false, 00:05:48.875 "generate_uuids": false, 00:05:48.875 "transport_tos": 0, 00:05:48.875 "nvme_error_stat": false, 00:05:48.875 "rdma_srq_size": 0, 00:05:48.875 "io_path_stat": false, 00:05:48.875 "allow_accel_sequence": false, 00:05:48.875 "rdma_max_cq_size": 0, 00:05:48.875 "rdma_cm_event_timeout_ms": 0, 
00:05:48.875 "dhchap_digests": [ 00:05:48.875 "sha256", 00:05:48.875 "sha384", 00:05:48.875 "sha512" 00:05:48.875 ], 00:05:48.875 "dhchap_dhgroups": [ 00:05:48.875 "null", 00:05:48.875 "ffdhe2048", 00:05:48.875 "ffdhe3072", 00:05:48.875 "ffdhe4096", 00:05:48.875 "ffdhe6144", 00:05:48.875 "ffdhe8192" 00:05:48.875 ] 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_nvme_set_hotplug", 00:05:48.875 "params": { 00:05:48.875 "period_us": 100000, 00:05:48.875 "enable": false 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_iscsi_set_options", 00:05:48.875 "params": { 00:05:48.875 "timeout_sec": 30 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_wait_for_examine" 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "nvmf", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "nvmf_set_config", 00:05:48.875 "params": { 00:05:48.875 "discovery_filter": "match_any", 00:05:48.875 "admin_cmd_passthru": { 00:05:48.875 "identify_ctrlr": false 00:05:48.875 } 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "nvmf_set_max_subsystems", 00:05:48.875 "params": { 00:05:48.875 "max_subsystems": 1024 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "nvmf_set_crdt", 00:05:48.875 "params": { 00:05:48.875 "crdt1": 0, 00:05:48.875 "crdt2": 0, 00:05:48.875 "crdt3": 0 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "nvmf_create_transport", 00:05:48.875 "params": { 00:05:48.875 "trtype": "TCP", 00:05:48.875 "max_queue_depth": 128, 00:05:48.875 "max_io_qpairs_per_ctrlr": 127, 00:05:48.875 "in_capsule_data_size": 4096, 00:05:48.875 "max_io_size": 131072, 00:05:48.875 "io_unit_size": 131072, 00:05:48.875 "max_aq_depth": 128, 00:05:48.875 "num_shared_buffers": 511, 00:05:48.875 "buf_cache_size": 4294967295, 00:05:48.875 "dif_insert_or_strip": false, 00:05:48.875 "zcopy": false, 00:05:48.875 "c2h_success": true, 00:05:48.875 "sock_priority": 0, 00:05:48.875 "abort_timeout_sec": 1, 00:05:48.875 "ack_timeout": 0, 00:05:48.875 "data_wr_pool_size": 0 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "nbd", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "ublk", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "vhost_blk", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "scsi", 00:05:48.875 "config": null 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "iscsi", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "iscsi_set_options", 00:05:48.875 "params": { 00:05:48.875 "node_base": "iqn.2016-06.io.spdk", 00:05:48.875 "max_sessions": 128, 00:05:48.875 "max_connections_per_session": 2, 00:05:48.875 "max_queue_depth": 64, 00:05:48.875 "default_time2wait": 2, 00:05:48.875 "default_time2retain": 20, 00:05:48.875 "first_burst_length": 8192, 00:05:48.875 "immediate_data": true, 00:05:48.875 "allow_duplicated_isid": false, 00:05:48.875 "error_recovery_level": 0, 00:05:48.875 "nop_timeout": 60, 00:05:48.875 "nop_in_interval": 30, 00:05:48.875 "disable_chap": false, 00:05:48.875 "require_chap": false, 00:05:48.875 "mutual_chap": false, 00:05:48.875 "chap_group": 0, 00:05:48.875 "max_large_datain_per_connection": 64, 00:05:48.875 "max_r2t_per_connection": 4, 00:05:48.875 "pdu_pool_size": 36864, 00:05:48.875 "immediate_data_pool_size": 16384, 00:05:48.875 "data_out_pool_size": 2048 
00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "vhost_scsi", 00:05:48.875 "config": [] 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 } 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3999815 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3999815 ']' 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3999815 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3999815 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3999815' 00:05:48.875 killing process with pid 3999815 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3999815 00:05:48.875 21:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3999815 00:05:49.444 21:06:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3999851 00:05:49.444 21:06:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:49.444 21:06:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3999851 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3999851 ']' 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3999851 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3999851 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3999851' 00:05:54.720 killing process with pid 3999851 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3999851 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3999851 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.720 00:05:54.720 real 
0m6.174s 00:05:54.720 user 0m5.850s 00:05:54.720 sys 0m0.578s 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.720 ************************************ 00:05:54.720 END TEST skip_rpc_with_json 00:05:54.720 ************************************ 00:05:54.720 21:06:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.720 21:06:51 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.720 21:06:51 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.720 21:06:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.720 ************************************ 00:05:54.720 START TEST skip_rpc_with_delay 00:05:54.720 ************************************ 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.720 [2024-07-14 21:06:51.497301] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
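Note on the error above: the skip_rpc_with_delay case deliberately combines --no-rpc-server with --wait-for-rpc, and spdk_tgt rejects that combination because there would be no RPC server left to receive the "continue" call. A minimal stand-alone sketch of the same check, assuming only that a built spdk_tgt binary is available (the relative path and the reliance on the exit status are assumptions, not taken from this log):

    # Expect spdk_tgt to refuse --wait-for-rpc when the RPC server is disabled.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt started despite --no-rpc-server --wait-for-rpc" >&2
        exit 1
    else
        echo "expected failure: --wait-for-rpc needs an RPC server"
    fi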
00:05:54.720 [2024-07-14 21:06:51.497463] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.720 00:05:54.720 real 0m0.038s 00:05:54.720 user 0m0.016s 00:05:54.720 sys 0m0.021s 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.720 21:06:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.720 ************************************ 00:05:54.720 END TEST skip_rpc_with_delay 00:05:54.720 ************************************ 00:05:54.720 21:06:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.720 21:06:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:54.720 21:06:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:54.720 21:06:51 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.720 21:06:51 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.720 21:06:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.720 ************************************ 00:05:54.720 START TEST exit_on_failed_rpc_init 00:05:54.720 ************************************ 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4000960 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4000960 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 4000960 ']' 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.720 21:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.720 [2024-07-14 21:06:51.614132] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:54.720 [2024-07-14 21:06:51.614204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000960 ] 00:05:54.978 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.978 [2024-07-14 21:06:51.682477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.978 [2024-07-14 21:06:51.721548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.237 21:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.237 [2024-07-14 21:06:51.923316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:55.237 [2024-07-14 21:06:51.923380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000966 ] 00:05:55.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.237 [2024-07-14 21:06:51.989269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.237 [2024-07-14 21:06:52.027408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.237 [2024-07-14 21:06:52.027493] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:55.238 [2024-07-14 21:06:52.027506] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.238 [2024-07-14 21:06:52.027514] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4000960 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 4000960 ']' 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 4000960 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4000960 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4000960' 00:05:55.238 killing process with pid 4000960 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 4000960 00:05:55.238 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 4000960 00:05:55.806 00:05:55.806 real 0m0.829s 00:05:55.806 user 0m0.833s 00:05:55.806 sys 0m0.390s 00:05:55.806 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.806 21:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.806 ************************************ 00:05:55.806 END TEST exit_on_failed_rpc_init 00:05:55.806 ************************************ 00:05:55.806 21:06:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:55.806 00:05:55.806 real 0m12.813s 00:05:55.806 user 0m11.967s 00:05:55.806 sys 0m1.561s 00:05:55.806 21:06:52 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.806 21:06:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.806 ************************************ 00:05:55.806 END TEST skip_rpc 00:05:55.806 ************************************ 00:05:55.806 21:06:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:55.806 21:06:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.806 21:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:55.806 21:06:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.806 ************************************ 00:05:55.806 START TEST rpc_client 00:05:55.806 ************************************ 00:05:55.806 21:06:52 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:55.806 * Looking for test storage... 00:05:55.806 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:55.806 21:06:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:55.806 OK 00:05:55.806 21:06:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.806 00:05:55.806 real 0m0.129s 00:05:55.806 user 0m0.054s 00:05:55.806 sys 0m0.082s 00:05:55.806 21:06:52 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.806 21:06:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.807 ************************************ 00:05:55.807 END TEST rpc_client 00:05:55.807 ************************************ 00:05:56.066 21:06:52 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.066 21:06:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.066 21:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.066 21:06:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 ************************************ 00:05:56.066 START TEST json_config 00:05:56.066 ************************************ 00:05:56.067 21:06:52 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.067 21:06:52 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:56.067 21:06:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.067 21:06:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.067 21:06:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.067 21:06:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.067 21:06:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.067 21:06:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.067 21:06:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.067 21:06:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@47 -- # : 0 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.067 21:06:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:56.067 WARNING: No tests are enabled so not running JSON configuration tests 00:05:56.067 21:06:52 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:56.067 00:05:56.067 real 0m0.109s 00:05:56.067 user 0m0.046s 00:05:56.067 sys 0m0.064s 00:05:56.067 21:06:52 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.067 21:06:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.067 ************************************ 00:05:56.067 END TEST json_config 00:05:56.067 ************************************ 00:05:56.067 21:06:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.067 21:06:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.067 21:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.067 21:06:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.067 ************************************ 00:05:56.067 START TEST json_config_extra_key 00:05:56.067 ************************************ 00:05:56.067 21:06:52 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.327 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.327 21:06:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
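The json_config_extra_key run that follows exercises a plain start/stop cycle: spdk_tgt is launched from a pre-built JSON config on its own RPC socket, the harness waits for the target to come up, and the app is then stopped with SIGINT. A rough equivalent outside the harness, assuming the same binary and substituting a simple poll loop for the harness's waitforlisten helper (CONFIG and SOCK below are illustrative placeholders, not values taken from this log):

    CONFIG=test/json_config/extra_key.json
    SOCK=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
    pid=$!
    for _ in $(seq 1 30); do          # wait up to ~15s for the RPC socket to appear
        [ -S "$SOCK" ] && break
        sleep 0.5
    done
    kill -SIGINT "$pid"               # graceful shutdown, mirroring the test's teardown
    wait "$pid" || true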
00:05:56.327 21:06:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.327 21:06:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.327 21:06:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.328 21:06:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.328 21:06:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.328 21:06:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.328 21:06:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.328 21:06:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.328 21:06:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:56.328 21:06:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:56.328 INFO: launching applications... 00:05:56.328 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4001370 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.328 Waiting for target to run... 00:05:56.328 21:06:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4001370 /var/tmp/spdk_tgt.sock 00:05:56.328 21:06:53 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 4001370 ']' 00:05:56.328 21:06:53 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.328 21:06:53 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.328 21:06:53 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.328 21:06:53 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.328 21:06:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.328 [2024-07-14 21:06:53.044322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:56.328 [2024-07-14 21:06:53.044368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001370 ] 00:05:56.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.587 [2024-07-14 21:06:53.325866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.587 [2024-07-14 21:06:53.347414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.154 21:06:53 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.154 21:06:53 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.154 00:05:57.154 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.154 INFO: shutting down applications... 00:05:57.154 21:06:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4001370 ]] 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4001370 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4001370 00:05:57.154 21:06:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4001370 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.721 21:06:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.721 SPDK target shutdown done 00:05:57.721 21:06:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.721 Success 00:05:57.721 00:05:57.721 real 0m1.446s 00:05:57.721 user 0m1.179s 00:05:57.721 sys 0m0.389s 00:05:57.721 21:06:54 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.721 21:06:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.721 ************************************ 00:05:57.721 END TEST json_config_extra_key 00:05:57.721 ************************************ 00:05:57.721 21:06:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.721 21:06:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.721 21:06:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.721 21:06:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.721 ************************************ 
00:05:57.721 START TEST alias_rpc 00:05:57.721 ************************************ 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.721 * Looking for test storage... 00:05:57.721 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:57.721 21:06:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.721 21:06:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4001691 00:05:57.721 21:06:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.721 21:06:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4001691 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 4001691 ']' 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.721 21:06:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.721 [2024-07-14 21:06:54.560759] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:57.721 [2024-07-14 21:06:54.560817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001691 ] 00:05:57.721 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.980 [2024-07-14 21:06:54.626639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.980 [2024-07-14 21:06:54.666251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.980 21:06:54 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.980 21:06:54 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:57.980 21:06:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:58.238 21:06:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4001691 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 4001691 ']' 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 4001691 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4001691 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4001691' 00:05:58.238 killing process with pid 4001691 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@965 -- # kill 4001691 00:05:58.238 21:06:55 alias_rpc -- common/autotest_common.sh@970 -- # wait 
4001691 00:05:58.495 00:05:58.495 real 0m0.931s 00:05:58.495 user 0m0.905s 00:05:58.495 sys 0m0.408s 00:05:58.495 21:06:55 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.495 21:06:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.495 ************************************ 00:05:58.495 END TEST alias_rpc 00:05:58.495 ************************************ 00:05:58.753 21:06:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:58.753 21:06:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.753 21:06:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.753 21:06:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.753 21:06:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.753 ************************************ 00:05:58.753 START TEST spdkcli_tcp 00:05:58.753 ************************************ 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.753 * Looking for test storage... 00:05:58.753 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4001794 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4001794 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 4001794 ']' 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.753 21:06:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.753 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.753 [2024-07-14 21:06:55.577560] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:58.753 [2024-07-14 21:06:55.577649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001794 ] 00:05:58.753 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.753 [2024-07-14 21:06:55.645770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.011 [2024-07-14 21:06:55.686704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.011 [2024-07-14 21:06:55.686707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.011 21:06:55 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.011 21:06:55 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:59.011 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4001963 00:05:59.011 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.011 21:06:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.269 [ 00:05:59.269 "spdk_get_version", 00:05:59.269 "rpc_get_methods", 00:05:59.269 "trace_get_info", 00:05:59.269 "trace_get_tpoint_group_mask", 00:05:59.269 "trace_disable_tpoint_group", 00:05:59.269 "trace_enable_tpoint_group", 00:05:59.269 "trace_clear_tpoint_mask", 00:05:59.269 "trace_set_tpoint_mask", 00:05:59.269 "vfu_tgt_set_base_path", 00:05:59.269 "framework_get_pci_devices", 00:05:59.269 "framework_get_config", 00:05:59.269 "framework_get_subsystems", 00:05:59.269 "keyring_get_keys", 00:05:59.269 "iobuf_get_stats", 00:05:59.269 "iobuf_set_options", 00:05:59.269 "sock_get_default_impl", 00:05:59.269 "sock_set_default_impl", 00:05:59.269 "sock_impl_set_options", 00:05:59.269 "sock_impl_get_options", 00:05:59.269 "vmd_rescan", 00:05:59.269 "vmd_remove_device", 00:05:59.269 "vmd_enable", 00:05:59.269 "accel_get_stats", 00:05:59.269 "accel_set_options", 00:05:59.269 "accel_set_driver", 00:05:59.269 "accel_crypto_key_destroy", 00:05:59.269 "accel_crypto_keys_get", 00:05:59.269 "accel_crypto_key_create", 00:05:59.269 "accel_assign_opc", 00:05:59.269 "accel_get_module_info", 00:05:59.269 "accel_get_opc_assignments", 00:05:59.269 "notify_get_notifications", 00:05:59.269 "notify_get_types", 00:05:59.269 "bdev_get_histogram", 00:05:59.269 "bdev_enable_histogram", 00:05:59.269 "bdev_set_qos_limit", 00:05:59.269 "bdev_set_qd_sampling_period", 00:05:59.269 "bdev_get_bdevs", 00:05:59.269 "bdev_reset_iostat", 00:05:59.269 "bdev_get_iostat", 00:05:59.269 "bdev_examine", 00:05:59.269 "bdev_wait_for_examine", 00:05:59.269 "bdev_set_options", 00:05:59.269 "scsi_get_devices", 00:05:59.269 "thread_set_cpumask", 00:05:59.269 "framework_get_scheduler", 00:05:59.269 "framework_set_scheduler", 00:05:59.269 "framework_get_reactors", 00:05:59.269 "thread_get_io_channels", 00:05:59.269 "thread_get_pollers", 00:05:59.269 "thread_get_stats", 00:05:59.269 "framework_monitor_context_switch", 00:05:59.269 "spdk_kill_instance", 00:05:59.269 "log_enable_timestamps", 00:05:59.269 "log_get_flags", 00:05:59.269 "log_clear_flag", 00:05:59.269 "log_set_flag", 00:05:59.269 "log_get_level", 00:05:59.269 "log_set_level", 00:05:59.269 "log_get_print_level", 00:05:59.269 "log_set_print_level", 00:05:59.269 "framework_enable_cpumask_locks", 00:05:59.269 "framework_disable_cpumask_locks", 00:05:59.269 "framework_wait_init", 00:05:59.269 
"framework_start_init", 00:05:59.269 "virtio_blk_create_transport", 00:05:59.269 "virtio_blk_get_transports", 00:05:59.269 "vhost_controller_set_coalescing", 00:05:59.269 "vhost_get_controllers", 00:05:59.269 "vhost_delete_controller", 00:05:59.269 "vhost_create_blk_controller", 00:05:59.269 "vhost_scsi_controller_remove_target", 00:05:59.269 "vhost_scsi_controller_add_target", 00:05:59.269 "vhost_start_scsi_controller", 00:05:59.269 "vhost_create_scsi_controller", 00:05:59.269 "ublk_recover_disk", 00:05:59.269 "ublk_get_disks", 00:05:59.269 "ublk_stop_disk", 00:05:59.269 "ublk_start_disk", 00:05:59.269 "ublk_destroy_target", 00:05:59.269 "ublk_create_target", 00:05:59.269 "nbd_get_disks", 00:05:59.269 "nbd_stop_disk", 00:05:59.269 "nbd_start_disk", 00:05:59.269 "env_dpdk_get_mem_stats", 00:05:59.269 "nvmf_stop_mdns_prr", 00:05:59.269 "nvmf_publish_mdns_prr", 00:05:59.269 "nvmf_subsystem_get_listeners", 00:05:59.269 "nvmf_subsystem_get_qpairs", 00:05:59.269 "nvmf_subsystem_get_controllers", 00:05:59.269 "nvmf_get_stats", 00:05:59.269 "nvmf_get_transports", 00:05:59.269 "nvmf_create_transport", 00:05:59.269 "nvmf_get_targets", 00:05:59.269 "nvmf_delete_target", 00:05:59.269 "nvmf_create_target", 00:05:59.269 "nvmf_subsystem_allow_any_host", 00:05:59.269 "nvmf_subsystem_remove_host", 00:05:59.269 "nvmf_subsystem_add_host", 00:05:59.269 "nvmf_ns_remove_host", 00:05:59.269 "nvmf_ns_add_host", 00:05:59.269 "nvmf_subsystem_remove_ns", 00:05:59.269 "nvmf_subsystem_add_ns", 00:05:59.269 "nvmf_subsystem_listener_set_ana_state", 00:05:59.269 "nvmf_discovery_get_referrals", 00:05:59.269 "nvmf_discovery_remove_referral", 00:05:59.269 "nvmf_discovery_add_referral", 00:05:59.269 "nvmf_subsystem_remove_listener", 00:05:59.269 "nvmf_subsystem_add_listener", 00:05:59.269 "nvmf_delete_subsystem", 00:05:59.269 "nvmf_create_subsystem", 00:05:59.269 "nvmf_get_subsystems", 00:05:59.269 "nvmf_set_crdt", 00:05:59.269 "nvmf_set_config", 00:05:59.269 "nvmf_set_max_subsystems", 00:05:59.269 "iscsi_get_histogram", 00:05:59.269 "iscsi_enable_histogram", 00:05:59.269 "iscsi_set_options", 00:05:59.269 "iscsi_get_auth_groups", 00:05:59.269 "iscsi_auth_group_remove_secret", 00:05:59.269 "iscsi_auth_group_add_secret", 00:05:59.269 "iscsi_delete_auth_group", 00:05:59.269 "iscsi_create_auth_group", 00:05:59.269 "iscsi_set_discovery_auth", 00:05:59.269 "iscsi_get_options", 00:05:59.269 "iscsi_target_node_request_logout", 00:05:59.269 "iscsi_target_node_set_redirect", 00:05:59.269 "iscsi_target_node_set_auth", 00:05:59.269 "iscsi_target_node_add_lun", 00:05:59.269 "iscsi_get_stats", 00:05:59.270 "iscsi_get_connections", 00:05:59.270 "iscsi_portal_group_set_auth", 00:05:59.270 "iscsi_start_portal_group", 00:05:59.270 "iscsi_delete_portal_group", 00:05:59.270 "iscsi_create_portal_group", 00:05:59.270 "iscsi_get_portal_groups", 00:05:59.270 "iscsi_delete_target_node", 00:05:59.270 "iscsi_target_node_remove_pg_ig_maps", 00:05:59.270 "iscsi_target_node_add_pg_ig_maps", 00:05:59.270 "iscsi_create_target_node", 00:05:59.270 "iscsi_get_target_nodes", 00:05:59.270 "iscsi_delete_initiator_group", 00:05:59.270 "iscsi_initiator_group_remove_initiators", 00:05:59.270 "iscsi_initiator_group_add_initiators", 00:05:59.270 "iscsi_create_initiator_group", 00:05:59.270 "iscsi_get_initiator_groups", 00:05:59.270 "keyring_linux_set_options", 00:05:59.270 "keyring_file_remove_key", 00:05:59.270 "keyring_file_add_key", 00:05:59.270 "vfu_virtio_create_scsi_endpoint", 00:05:59.270 "vfu_virtio_scsi_remove_target", 00:05:59.270 
"vfu_virtio_scsi_add_target", 00:05:59.270 "vfu_virtio_create_blk_endpoint", 00:05:59.270 "vfu_virtio_delete_endpoint", 00:05:59.270 "iaa_scan_accel_module", 00:05:59.270 "dsa_scan_accel_module", 00:05:59.270 "ioat_scan_accel_module", 00:05:59.270 "accel_error_inject_error", 00:05:59.270 "bdev_iscsi_delete", 00:05:59.270 "bdev_iscsi_create", 00:05:59.270 "bdev_iscsi_set_options", 00:05:59.270 "bdev_virtio_attach_controller", 00:05:59.270 "bdev_virtio_scsi_get_devices", 00:05:59.270 "bdev_virtio_detach_controller", 00:05:59.270 "bdev_virtio_blk_set_hotplug", 00:05:59.270 "bdev_ftl_set_property", 00:05:59.270 "bdev_ftl_get_properties", 00:05:59.270 "bdev_ftl_get_stats", 00:05:59.270 "bdev_ftl_unmap", 00:05:59.270 "bdev_ftl_unload", 00:05:59.270 "bdev_ftl_delete", 00:05:59.270 "bdev_ftl_load", 00:05:59.270 "bdev_ftl_create", 00:05:59.270 "bdev_aio_delete", 00:05:59.270 "bdev_aio_rescan", 00:05:59.270 "bdev_aio_create", 00:05:59.270 "blobfs_create", 00:05:59.270 "blobfs_detect", 00:05:59.270 "blobfs_set_cache_size", 00:05:59.270 "bdev_zone_block_delete", 00:05:59.270 "bdev_zone_block_create", 00:05:59.270 "bdev_delay_delete", 00:05:59.270 "bdev_delay_create", 00:05:59.270 "bdev_delay_update_latency", 00:05:59.270 "bdev_split_delete", 00:05:59.270 "bdev_split_create", 00:05:59.270 "bdev_error_inject_error", 00:05:59.270 "bdev_error_delete", 00:05:59.270 "bdev_error_create", 00:05:59.270 "bdev_raid_set_options", 00:05:59.270 "bdev_raid_remove_base_bdev", 00:05:59.270 "bdev_raid_add_base_bdev", 00:05:59.270 "bdev_raid_delete", 00:05:59.270 "bdev_raid_create", 00:05:59.270 "bdev_raid_get_bdevs", 00:05:59.270 "bdev_lvol_set_parent_bdev", 00:05:59.270 "bdev_lvol_set_parent", 00:05:59.270 "bdev_lvol_check_shallow_copy", 00:05:59.270 "bdev_lvol_start_shallow_copy", 00:05:59.270 "bdev_lvol_grow_lvstore", 00:05:59.270 "bdev_lvol_get_lvols", 00:05:59.270 "bdev_lvol_get_lvstores", 00:05:59.270 "bdev_lvol_delete", 00:05:59.270 "bdev_lvol_set_read_only", 00:05:59.270 "bdev_lvol_resize", 00:05:59.270 "bdev_lvol_decouple_parent", 00:05:59.270 "bdev_lvol_inflate", 00:05:59.270 "bdev_lvol_rename", 00:05:59.270 "bdev_lvol_clone_bdev", 00:05:59.270 "bdev_lvol_clone", 00:05:59.270 "bdev_lvol_snapshot", 00:05:59.270 "bdev_lvol_create", 00:05:59.270 "bdev_lvol_delete_lvstore", 00:05:59.270 "bdev_lvol_rename_lvstore", 00:05:59.270 "bdev_lvol_create_lvstore", 00:05:59.270 "bdev_passthru_delete", 00:05:59.270 "bdev_passthru_create", 00:05:59.270 "bdev_nvme_cuse_unregister", 00:05:59.270 "bdev_nvme_cuse_register", 00:05:59.270 "bdev_opal_new_user", 00:05:59.270 "bdev_opal_set_lock_state", 00:05:59.270 "bdev_opal_delete", 00:05:59.270 "bdev_opal_get_info", 00:05:59.270 "bdev_opal_create", 00:05:59.270 "bdev_nvme_opal_revert", 00:05:59.270 "bdev_nvme_opal_init", 00:05:59.270 "bdev_nvme_send_cmd", 00:05:59.270 "bdev_nvme_get_path_iostat", 00:05:59.270 "bdev_nvme_get_mdns_discovery_info", 00:05:59.270 "bdev_nvme_stop_mdns_discovery", 00:05:59.270 "bdev_nvme_start_mdns_discovery", 00:05:59.270 "bdev_nvme_set_multipath_policy", 00:05:59.270 "bdev_nvme_set_preferred_path", 00:05:59.270 "bdev_nvme_get_io_paths", 00:05:59.270 "bdev_nvme_remove_error_injection", 00:05:59.270 "bdev_nvme_add_error_injection", 00:05:59.270 "bdev_nvme_get_discovery_info", 00:05:59.270 "bdev_nvme_stop_discovery", 00:05:59.270 "bdev_nvme_start_discovery", 00:05:59.270 "bdev_nvme_get_controller_health_info", 00:05:59.270 "bdev_nvme_disable_controller", 00:05:59.270 "bdev_nvme_enable_controller", 00:05:59.270 "bdev_nvme_reset_controller", 00:05:59.270 
"bdev_nvme_get_transport_statistics", 00:05:59.270 "bdev_nvme_apply_firmware", 00:05:59.270 "bdev_nvme_detach_controller", 00:05:59.270 "bdev_nvme_get_controllers", 00:05:59.270 "bdev_nvme_attach_controller", 00:05:59.270 "bdev_nvme_set_hotplug", 00:05:59.270 "bdev_nvme_set_options", 00:05:59.270 "bdev_null_resize", 00:05:59.270 "bdev_null_delete", 00:05:59.270 "bdev_null_create", 00:05:59.270 "bdev_malloc_delete", 00:05:59.270 "bdev_malloc_create" 00:05:59.270 ] 00:05:59.270 21:06:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.270 21:06:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:59.270 21:06:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4001794 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 4001794 ']' 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 4001794 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4001794 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4001794' 00:05:59.270 killing process with pid 4001794 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 4001794 00:05:59.270 21:06:56 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 4001794 00:05:59.837 00:05:59.837 real 0m0.989s 00:05:59.837 user 0m1.646s 00:05:59.837 sys 0m0.448s 00:05:59.837 21:06:56 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.837 21:06:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.837 ************************************ 00:05:59.837 END TEST spdkcli_tcp 00:05:59.837 ************************************ 00:05:59.837 21:06:56 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.837 21:06:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.837 21:06:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.837 21:06:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.837 ************************************ 00:05:59.837 START TEST dpdk_mem_utility 00:05:59.837 ************************************ 00:05:59.837 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.837 * Looking for test storage... 
00:05:59.838 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:59.838 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.838 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4002085 00:05:59.838 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.838 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4002085 00:05:59.838 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 4002085 ']' 00:05:59.838 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.838 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.838 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.838 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.838 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.838 [2024-07-14 21:06:56.647247] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:59.838 [2024-07-14 21:06:56.647331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002085 ] 00:05:59.838 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.838 [2024-07-14 21:06:56.713438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.096 [2024-07-14 21:06:56.751947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.096 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.096 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:00.096 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:00.096 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:00.096 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.096 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.096 { 00:06:00.096 "filename": "/tmp/spdk_mem_dump.txt" 00:06:00.096 } 00:06:00.096 21:06:56 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.096 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:00.096 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:00.096 1 heaps totaling size 814.000000 MiB 00:06:00.096 size: 814.000000 MiB heap id: 0 00:06:00.096 end heaps---------- 00:06:00.096 8 mempools totaling size 598.116089 MiB 00:06:00.096 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:00.096 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:00.096 size: 84.521057 MiB name: bdev_io_4002085 00:06:00.096 size: 51.011292 MiB name: evtpool_4002085 00:06:00.096 size: 50.003479 
MiB name: msgpool_4002085 00:06:00.096 size: 21.763794 MiB name: PDU_Pool 00:06:00.096 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:00.096 size: 0.026123 MiB name: Session_Pool 00:06:00.096 end mempools------- 00:06:00.096 6 memzones totaling size 4.142822 MiB 00:06:00.096 size: 1.000366 MiB name: RG_ring_0_4002085 00:06:00.096 size: 1.000366 MiB name: RG_ring_1_4002085 00:06:00.096 size: 1.000366 MiB name: RG_ring_4_4002085 00:06:00.096 size: 1.000366 MiB name: RG_ring_5_4002085 00:06:00.096 size: 0.125366 MiB name: RG_ring_2_4002085 00:06:00.096 size: 0.015991 MiB name: RG_ring_3_4002085 00:06:00.096 end memzones------- 00:06:00.096 21:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:00.354 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:00.354 list of free elements. size: 12.519348 MiB 00:06:00.354 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:00.354 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:00.354 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:00.354 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:00.354 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:00.355 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:00.355 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:00.355 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:00.355 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:00.355 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:00.355 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:00.355 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:00.355 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:00.355 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:00.355 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:00.355 list of standard malloc elements. 
size: 199.218079 MiB 00:06:00.355 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:00.355 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:00.355 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:00.355 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:00.355 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:00.355 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:00.355 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:00.355 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:00.355 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:00.355 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:00.355 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:00.355 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:00.355 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:00.355 list of memzone associated elements. 
size: 602.262573 MiB 00:06:00.355 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:00.355 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:00.355 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:00.355 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:00.355 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:00.355 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4002085_0 00:06:00.355 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:00.355 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4002085_0 00:06:00.355 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:00.355 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4002085_0 00:06:00.355 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:00.355 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:00.355 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:00.355 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:00.355 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:00.355 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4002085 00:06:00.355 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:00.355 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4002085 00:06:00.355 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:00.355 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4002085 00:06:00.355 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:00.355 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:00.355 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:00.355 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:00.355 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:00.355 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:00.355 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:00.355 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:00.355 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:00.355 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4002085 00:06:00.355 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:00.355 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4002085 00:06:00.355 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:00.355 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4002085 00:06:00.355 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:00.355 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4002085 00:06:00.355 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:00.355 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4002085 00:06:00.355 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:00.355 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:00.355 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:00.355 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:00.355 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:00.355 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:00.355 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:00.355 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4002085 00:06:00.355 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:00.355 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:00.355 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:00.355 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:00.355 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:00.355 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4002085 00:06:00.355 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:00.355 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:00.355 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:00.355 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4002085 00:06:00.355 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:00.355 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4002085 00:06:00.355 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:00.355 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:00.355 21:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:00.355 21:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4002085 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 4002085 ']' 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 4002085 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002085 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002085' 00:06:00.355 killing process with pid 4002085 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 4002085 00:06:00.355 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 4002085 00:06:00.614 00:06:00.614 real 0m0.847s 00:06:00.614 user 0m0.726s 00:06:00.614 sys 0m0.410s 00:06:00.614 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.614 21:06:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.614 ************************************ 00:06:00.614 END TEST dpdk_mem_utility 00:06:00.614 ************************************ 00:06:00.614 21:06:57 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:00.614 21:06:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.614 21:06:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.614 21:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.614 ************************************ 00:06:00.614 START TEST event 00:06:00.614 ************************************ 00:06:00.614 21:06:57 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:00.872 * Looking for test storage... 
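A condensed sketch of what test_dpdk_mem_info.sh just did: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory statistics to the file named in the reply (/tmp/spdk_mem_dump.txt above), and dpdk_mem_info.py then summarizes that dump; the second invocation with -m 0 is the one that produced the detailed heap-0 element listing above. Paths and flags are copied from the log:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # have the running spdk_tgt dump DPDK memory stats; the reply names the dump file
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from that dump
    "$SPDK/scripts/dpdk_mem_info.py"
    # rerun with -m 0, which printed the per-element heap listing shown above
    "$SPDK/scripts/dpdk_mem_info.py" -m 0
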
00:06:00.872 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:00.872 21:06:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:00.872 21:06:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.872 21:06:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.872 21:06:57 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:00.872 21:06:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.872 21:06:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.872 ************************************ 00:06:00.873 START TEST event_perf 00:06:00.873 ************************************ 00:06:00.873 21:06:57 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.873 Running I/O for 1 seconds...[2024-07-14 21:06:57.608139] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:00.873 [2024-07-14 21:06:57.608245] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002403 ] 00:06:00.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.873 [2024-07-14 21:06:57.679530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.873 [2024-07-14 21:06:57.720440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.873 [2024-07-14 21:06:57.720538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.873 [2024-07-14 21:06:57.720626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.873 [2024-07-14 21:06:57.720629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.251 Running I/O for 1 seconds... 00:06:02.251 lcore 0: 201265 00:06:02.251 lcore 1: 201263 00:06:02.251 lcore 2: 201264 00:06:02.251 lcore 3: 201266 00:06:02.251 done. 00:06:02.251 00:06:02.251 real 0m1.184s 00:06:02.251 user 0m4.085s 00:06:02.251 sys 0m0.097s 00:06:02.251 21:06:58 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.251 21:06:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.251 ************************************ 00:06:02.251 END TEST event_perf 00:06:02.251 ************************************ 00:06:02.251 21:06:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.251 21:06:58 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:02.251 21:06:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.251 21:06:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.251 ************************************ 00:06:02.251 START TEST event_reactor 00:06:02.251 ************************************ 00:06:02.251 21:06:58 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.251 [2024-07-14 21:06:58.868225] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
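The event_perf benchmark that just completed is a standalone binary; as invoked in the log, -m 0xF spreads it across four cores and -t 1 runs it for one second, and the per-lcore lines are the event counts each core managed in that window. A minimal reproduction under the same build tree:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # four reactors (core mask 0xF), one second of event processing per the -t flag
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1

The reactor and reactor_perf binaries exercised next follow the same pattern with -t 1 on a single core.
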
00:06:02.251 [2024-07-14 21:06:58.868307] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002613 ] 00:06:02.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.251 [2024-07-14 21:06:58.936884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.251 [2024-07-14 21:06:58.974072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.188 test_start 00:06:03.188 oneshot 00:06:03.188 tick 100 00:06:03.188 tick 100 00:06:03.188 tick 250 00:06:03.188 tick 100 00:06:03.188 tick 100 00:06:03.188 tick 100 00:06:03.188 tick 250 00:06:03.188 tick 500 00:06:03.188 tick 100 00:06:03.188 tick 100 00:06:03.188 tick 250 00:06:03.188 tick 100 00:06:03.188 tick 100 00:06:03.188 test_end 00:06:03.188 00:06:03.188 real 0m1.174s 00:06:03.188 user 0m1.090s 00:06:03.188 sys 0m0.081s 00:06:03.188 21:07:00 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.188 21:07:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:03.188 ************************************ 00:06:03.188 END TEST event_reactor 00:06:03.188 ************************************ 00:06:03.188 21:07:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.188 21:07:00 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:03.188 21:07:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.188 21:07:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.446 ************************************ 00:06:03.446 START TEST event_reactor_perf 00:06:03.446 ************************************ 00:06:03.446 21:07:00 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.446 [2024-07-14 21:07:00.118328] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:03.446 [2024-07-14 21:07:00.118465] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002759 ] 00:06:03.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.446 [2024-07-14 21:07:00.189095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.446 [2024-07-14 21:07:00.227076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.383 test_start 00:06:04.383 test_end 00:06:04.383 Performance: 974271 events per second 00:06:04.383 00:06:04.383 real 0m1.181s 00:06:04.383 user 0m1.091s 00:06:04.383 sys 0m0.087s 00:06:04.383 21:07:01 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.383 21:07:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.383 ************************************ 00:06:04.383 END TEST event_reactor_perf 00:06:04.383 ************************************ 00:06:04.642 21:07:01 event -- event/event.sh@49 -- # uname -s 00:06:04.642 21:07:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.642 21:07:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.642 21:07:01 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.642 21:07:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.642 21:07:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.642 ************************************ 00:06:04.642 START TEST event_scheduler 00:06:04.642 ************************************ 00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.642 * Looking for test storage... 00:06:04.642 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:04.642 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.642 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4003037 00:06:04.642 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.642 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.642 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4003037 00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 4003037 ']' 00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
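The scheduler test app starting here is launched with --wait-for-rpc, so its initialization pauses until the test selects a scheduler over RPC. The sequence the log shows next, issued through the rpc_cmd wrapper from autotest_common.sh, amounts roughly to:

    rpc_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    # select the dynamic scheduler while the app is still paused by --wait-for-rpc
    $rpc_py framework_set_scheduler dynamic
    # resume initialization; the reactors then come up on cores 0-3 as logged below
    $rpc_py framework_start_init
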
00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.642 21:07:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.642 [2024-07-14 21:07:01.491889] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:04.642 [2024-07-14 21:07:01.491976] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003037 ] 00:06:04.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.901 [2024-07-14 21:07:01.557563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.901 [2024-07-14 21:07:01.599604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.901 [2024-07-14 21:07:01.599688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.901 [2024-07-14 21:07:01.599769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.901 [2024-07-14 21:07:01.599770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.901 21:07:01 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.901 21:07:01 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:04.901 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:04.901 21:07:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.901 21:07:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.901 POWER: Env isn't set yet! 00:06:04.901 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:04.901 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.901 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.901 POWER: Attempting to initialise PSTAT power management... 
00:06:04.901 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:04.901 POWER: Initialized successfully for lcore 0 power management 00:06:04.901 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:04.901 POWER: Initialized successfully for lcore 1 power management 00:06:04.901 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:04.901 POWER: Initialized successfully for lcore 2 power management 00:06:04.901 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:04.901 POWER: Initialized successfully for lcore 3 power management 00:06:04.901 [2024-07-14 21:07:01.694716] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:04.902 [2024-07-14 21:07:01.694730] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:04.902 [2024-07-14 21:07:01.694740] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.902 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.902 [2024-07-14 21:07:01.756634] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.902 21:07:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.902 21:07:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.902 ************************************ 00:06:04.902 START TEST scheduler_create_thread 00:06:04.902 ************************************ 00:06:04.902 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:04.902 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.902 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.902 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 2 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 3 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 4 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 5 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 6 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 7 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 8 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 9 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.161 10 00:06:05.161 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.162 21:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.096 21:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.096 21:07:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.096 21:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.096 21:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 21:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.474 21:07:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.474 21:07:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.474 21:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.474 21:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.577 21:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.577 00:06:08.577 real 0m3.378s 00:06:08.577 user 0m0.027s 00:06:08.577 sys 0m0.005s 00:06:08.577 21:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.578 21:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.578 ************************************ 00:06:08.578 END TEST scheduler_create_thread 00:06:08.578 ************************************ 00:06:08.578 21:07:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.578 21:07:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4003037 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 4003037 ']' 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 4003037 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
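For reference, the scheduler_create_thread sub-test wrapping up here drives the app through an RPC plugin; the thread-management calls seen above can be sketched as below. The -n/-m/-a arguments (thread name, cpumask and, apparently, a requested activity percentage) and the thread ids 11 and 12 are taken from the log; rpc.py can only resolve scheduler_plugin if the plugin module is importable, which the test harness is assumed to arrange.

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    # create a thread pinned to core 0 (mask 0x1) with a requested activity of 100%
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # lower the reported activity of the thread created as id 11 to 50%
    $rpc scheduler_thread_set_active 11 50
    # delete the thread created as id 12, as the sub-test does before it exits
    $rpc scheduler_thread_delete 12
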
00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4003037 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4003037' 00:06:08.578 killing process with pid 4003037 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 4003037 00:06:08.578 21:07:05 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 4003037 00:06:08.841 [2024-07-14 21:07:05.556412] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:08.841 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:08.841 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:08.841 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:08.841 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:08.841 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:08.841 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:08.841 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:08.841 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:09.100 00:06:09.100 real 0m4.405s 00:06:09.100 user 0m7.792s 00:06:09.100 sys 0m0.387s 00:06:09.100 21:07:05 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.100 21:07:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.100 ************************************ 00:06:09.100 END TEST event_scheduler 00:06:09.100 ************************************ 00:06:09.100 21:07:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:09.100 21:07:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:09.100 21:07:05 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.100 21:07:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.100 21:07:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.100 ************************************ 00:06:09.100 START TEST app_repeat 00:06:09.100 ************************************ 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4003889 00:06:09.100 21:07:05 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4003889' 00:06:09.100 Process app_repeat pid: 4003889 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:09.100 spdk_app_start Round 0 00:06:09.100 21:07:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4003889 /var/tmp/spdk-nbd.sock 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4003889 ']' 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.100 21:07:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.100 [2024-07-14 21:07:05.886453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:09.100 [2024-07-14 21:07:05.886540] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003889 ] 00:06:09.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.100 [2024-07-14 21:07:05.957129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.100 [2024-07-14 21:07:05.994788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.100 [2024-07-14 21:07:05.994790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.361 21:07:06 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.361 21:07:06 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:09.361 21:07:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.361 Malloc0 00:06:09.361 21:07:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.622 Malloc1 00:06:09.622 21:07:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.622 21:07:06 
event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.622 21:07:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.882 /dev/nbd0 00:06:09.882 21:07:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.882 21:07:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.882 1+0 records in 00:06:09.882 1+0 records out 00:06:09.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221306 s, 18.5 MB/s 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:09.882 21:07:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:09.882 21:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.882 21:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.882 21:07:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.141 /dev/nbd1 00:06:10.141 21:07:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.141 21:07:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:10.141 21:07:06 
event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.141 1+0 records in 00:06:10.141 1+0 records out 00:06:10.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235822 s, 17.4 MB/s 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:10.141 21:07:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:10.141 21:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.142 21:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.142 21:07:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.142 21:07:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.142 21:07:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.142 21:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.142 { 00:06:10.142 "nbd_device": "/dev/nbd0", 00:06:10.142 "bdev_name": "Malloc0" 00:06:10.142 }, 00:06:10.142 { 00:06:10.142 "nbd_device": "/dev/nbd1", 00:06:10.142 "bdev_name": "Malloc1" 00:06:10.142 } 00:06:10.142 ]' 00:06:10.142 21:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.142 { 00:06:10.142 "nbd_device": "/dev/nbd0", 00:06:10.142 "bdev_name": "Malloc0" 00:06:10.142 }, 00:06:10.142 { 00:06:10.142 "nbd_device": "/dev/nbd1", 00:06:10.142 "bdev_name": "Malloc1" 00:06:10.142 } 00:06:10.142 ]' 00:06:10.142 21:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.401 /dev/nbd1' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.401 /dev/nbd1' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.401 
21:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.401 256+0 records in 00:06:10.401 256+0 records out 00:06:10.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110353 s, 95.0 MB/s 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.401 256+0 records in 00:06:10.401 256+0 records out 00:06:10.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202284 s, 51.8 MB/s 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.401 256+0 records in 00:06:10.401 256+0 records out 00:06:10.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218363 s, 48.0 MB/s 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.401 21:07:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.402 21:07:07 
event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.402 21:07:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.402 21:07:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.661 21:07:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.921 21:07:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.921 21:07:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
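The Round 0 pass above is the complete nbd data-path check: two 64 MB / 4 KiB-block Malloc bdevs are created over /var/tmp/spdk-nbd.sock, exported as /dev/nbd0 and /dev/nbd1, loaded with 1 MiB of random data via dd, read back and compared with cmp, and finally stopped again. A stand-alone sketch of that write/verify round trip follows; the RPC names, dd/cmp arguments and socket path are taken from the trace, but the script itself is an illustration (it assumes rpc.py is on PATH and an SPDK app is already listening on the socket), not the event/event.sh test code.

#!/usr/bin/env bash
# Sketch of the nbd write/verify cycle traced above.
set -euo pipefail

sock=/var/tmp/spdk-nbd.sock
tmp=$(mktemp /tmp/nbdrandtest.XXXXXX)

# Create two 64 MB bdevs with a 4096-byte block size and export them as nbd devices.
rpc.py -s "$sock" bdev_malloc_create 64 4096          # -> Malloc0
rpc.py -s "$sock" bdev_malloc_create 64 4096          # -> Malloc1
rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data to each device, then read it back and compare.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" "$dev"                        # non-zero exit means the data did not survive the round trip
done
rm -f "$tmp"

# Tear the export down again.
rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
rpc.py -s "$sock" nbd_stop_disk /dev/nbd1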
00:06:11.181 21:07:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.441 [2024-07-14 21:07:08.140557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.441 [2024-07-14 21:07:08.175496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.441 [2024-07-14 21:07:08.175501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.441 [2024-07-14 21:07:08.215418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.441 [2024-07-14 21:07:08.215465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.732 21:07:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.732 21:07:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.732 spdk_app_start Round 1 00:06:14.732 21:07:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4003889 /var/tmp/spdk-nbd.sock 00:06:14.732 21:07:10 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4003889 ']' 00:06:14.732 21:07:10 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.732 21:07:10 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.732 21:07:10 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.732 21:07:10 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.732 21:07:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.732 21:07:11 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.732 21:07:11 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:14.732 21:07:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.732 Malloc0 00:06:14.732 21:07:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.732 Malloc1 00:06:14.732 21:07:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.732 
21:07:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.732 21:07:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.992 /dev/nbd0 00:06:14.992 21:07:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.992 21:07:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.992 1+0 records in 00:06:14.992 1+0 records out 00:06:14.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221928 s, 18.5 MB/s 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:14.992 21:07:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:14.992 21:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.992 21:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.992 21:07:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.992 /dev/nbd1 00:06:15.251 21:07:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.251 21:07:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:15.251 21:07:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:15.252 
21:07:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:15.252 21:07:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.252 1+0 records in 00:06:15.252 1+0 records out 00:06:15.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180462 s, 22.7 MB/s 00:06:15.252 21:07:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:15.252 21:07:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:15.252 21:07:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:15.252 21:07:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:15.252 21:07:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:15.252 21:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.252 21:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.252 21:07:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.252 21:07:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.252 21:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.252 { 00:06:15.252 "nbd_device": "/dev/nbd0", 00:06:15.252 "bdev_name": "Malloc0" 00:06:15.252 }, 00:06:15.252 { 00:06:15.252 "nbd_device": "/dev/nbd1", 00:06:15.252 "bdev_name": "Malloc1" 00:06:15.252 } 00:06:15.252 ]' 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.252 { 00:06:15.252 "nbd_device": "/dev/nbd0", 00:06:15.252 "bdev_name": "Malloc0" 00:06:15.252 }, 00:06:15.252 { 00:06:15.252 "nbd_device": "/dev/nbd1", 00:06:15.252 "bdev_name": "Malloc1" 00:06:15.252 } 00:06:15.252 ]' 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.252 /dev/nbd1' 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.252 /dev/nbd1' 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.252 21:07:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.252 21:07:12 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.512 256+0 records in 00:06:15.512 256+0 records out 00:06:15.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113909 s, 92.1 MB/s 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.512 256+0 records in 00:06:15.512 256+0 records out 00:06:15.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206516 s, 50.8 MB/s 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.512 256+0 records in 00:06:15.512 256+0 records out 00:06:15.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219322 s, 47.8 MB/s 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.512 21:07:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.772 21:07:12 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.772 21:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.031 21:07:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.031 21:07:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.290 21:07:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.549 [2024-07-14 21:07:13.213964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.549 [2024-07-14 21:07:13.248575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.549 [2024-07-14 21:07:13.248578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.549 [2024-07-14 21:07:13.288855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
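Rounds 0 and 1 both lean on the same readiness helpers: waitfornbd polls /proc/partitions until the new device node appears and then proves it is readable with a single 4 KiB dd into a scratch file, while waitfornbd_exit polls until the entry disappears after nbd_stop_disk. A simplified rendering of that polling idiom is below; the 20-iteration cap, the /proc/partitions grep and the dd/stat check mirror the trace, while the one-second sleep between retries is an assumption of this sketch (the real helpers live in autotest_common.sh and nbd_common.sh).

# Simplified readiness checks in the spirit of waitfornbd / waitfornbd_exit.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # Prove the device is readable: pull one 4 KiB block out of it.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                                  # same "size != 0" check as in the trace
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 1
    done
    return 1                                          # the device never went away
}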
00:06:16.549 [2024-07-14 21:07:13.288902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.839 21:07:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.839 21:07:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.839 spdk_app_start Round 2 00:06:19.839 21:07:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4003889 /var/tmp/spdk-nbd.sock 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4003889 ']' 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:19.839 21:07:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.839 Malloc0 00:06:19.839 21:07:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.839 Malloc1 00:06:19.839 21:07:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.839 /dev/nbd0 00:06:19.839 
21:07:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.839 21:07:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.839 1+0 records in 00:06:19.839 1+0 records out 00:06:19.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219457 s, 18.7 MB/s 00:06:19.839 21:07:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.098 /dev/nbd1 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.098 1+0 records in 00:06:20.098 1+0 records out 00:06:20.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023833 s, 17.2 MB/s 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.098 21:07:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.098 21:07:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.357 21:07:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.357 { 00:06:20.357 "nbd_device": "/dev/nbd0", 00:06:20.357 "bdev_name": "Malloc0" 00:06:20.357 }, 00:06:20.357 { 00:06:20.357 "nbd_device": "/dev/nbd1", 00:06:20.357 "bdev_name": "Malloc1" 00:06:20.357 } 00:06:20.357 ]' 00:06:20.357 21:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.357 { 00:06:20.357 "nbd_device": "/dev/nbd0", 00:06:20.357 "bdev_name": "Malloc0" 00:06:20.357 }, 00:06:20.357 { 00:06:20.357 "nbd_device": "/dev/nbd1", 00:06:20.357 "bdev_name": "Malloc1" 00:06:20.357 } 00:06:20.357 ]' 00:06:20.357 21:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.358 /dev/nbd1' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.358 /dev/nbd1' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.358 256+0 records in 00:06:20.358 256+0 records out 00:06:20.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106197 s, 98.7 MB/s 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.358 256+0 records in 00:06:20.358 256+0 records out 00:06:20.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205248 s, 51.1 MB/s 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.358 256+0 records in 00:06:20.358 256+0 records out 00:06:20.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219248 s, 47.8 MB/s 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.358 21:07:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@45 
-- # return 0 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.617 21:07:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.876 21:07:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.877 21:07:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.877 21:07:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.135 21:07:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.135 21:07:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.394 21:07:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.394 [2024-07-14 21:07:18.229176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.394 [2024-07-14 21:07:18.263584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.394 [2024-07-14 21:07:18.263587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.653 [2024-07-14 21:07:18.303643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.653 [2024-07-14 21:07:18.303684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
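Each round closes by counting the exported devices twice: right after nbd_start_disk the count must be 2, and after nbd_stop_disk it must drop to 0. The count is derived from the nbd_get_disks RPC, whose JSON array is reduced with jq and grep -c exactly as in the trace above. A small wrapper around that idea (rpc.py assumed to be on PATH; not the verbatim nbd_get_count helper):

# Count the nbd devices currently exported on a given RPC socket.
nbd_get_count() {
    local sock=$1
    rpc.py -s "$sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true                    # grep -c exits non-zero when it prints 0
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
[ "$count" -eq 2 ] || echo "expected 2 nbd devices, found $count" >&2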
00:06:24.189 21:07:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4003889 /var/tmp/spdk-nbd.sock 00:06:24.189 21:07:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4003889 ']' 00:06:24.189 21:07:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.189 21:07:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.189 21:07:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.189 21:07:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.189 21:07:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:24.447 21:07:21 event.app_repeat -- event/event.sh@39 -- # killprocess 4003889 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 4003889 ']' 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 4003889 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4003889 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4003889' 00:06:24.447 killing process with pid 4003889 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@965 -- # kill 4003889 00:06:24.447 21:07:21 event.app_repeat -- common/autotest_common.sh@970 -- # wait 4003889 00:06:24.706 spdk_app_start is called in Round 0. 00:06:24.706 Shutdown signal received, stop current app iteration 00:06:24.706 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:24.706 spdk_app_start is called in Round 1. 00:06:24.706 Shutdown signal received, stop current app iteration 00:06:24.706 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:24.706 spdk_app_start is called in Round 2. 00:06:24.706 Shutdown signal received, stop current app iteration 00:06:24.706 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:24.706 spdk_app_start is called in Round 3. 
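The final teardown above follows the killprocess pattern used throughout these tests: verify the pid is still alive, read its command name with ps, refuse to signal anything whose name resolves to sudo, then kill and wait. A condensed, hedged version of that guard (the full helper, with its retry and platform handling, is in test/common/autotest_common.sh):

# Condensed killprocess-style guard; error handling and non-Linux paths trimmed.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != sudo ] || return 1           # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it; works because the test started it from this shell
}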
00:06:24.706 Shutdown signal received, stop current app iteration 00:06:24.706 21:07:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.706 21:07:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:24.706 00:06:24.706 real 0m15.599s 00:06:24.706 user 0m33.137s 00:06:24.706 sys 0m3.083s 00:06:24.706 21:07:21 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.706 21:07:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.706 ************************************ 00:06:24.706 END TEST app_repeat 00:06:24.707 ************************************ 00:06:24.707 21:07:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.707 21:07:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.707 21:07:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.707 21:07:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.707 21:07:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.707 ************************************ 00:06:24.707 START TEST cpu_locks 00:06:24.707 ************************************ 00:06:24.707 21:07:21 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.966 * Looking for test storage... 00:06:24.966 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:24.966 21:07:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.966 21:07:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.966 21:07:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.966 21:07:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.966 21:07:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.966 21:07:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.966 21:07:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 ************************************ 00:06:24.966 START TEST default_locks 00:06:24.966 ************************************ 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4006869 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4006869 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 4006869 ']' 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
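default_locks starts spdk_tgt on core mask 0x1 and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock; only after that does it inspect the per-core lock files with lslocks. The essence of that wait is polling the UNIX-domain RPC socket until a harmless RPC succeeds. A rough approximation (the retry count and sleep interval are assumptions of this sketch; the real helper tracks max_retries=100 and reports errors in more detail):

# Rough waitforlisten equivalent: poll the RPC socket of a freshly started SPDK app.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 1; i <= 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # the app died before it ever listened
        rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}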
00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.966 21:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 [2024-07-14 21:07:21.717746] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:24.966 [2024-07-14 21:07:21.717830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006869 ] 00:06:24.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.966 [2024-07-14 21:07:21.785717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.966 [2024-07-14 21:07:21.823436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.225 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.225 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:25.226 21:07:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4006869 00:06:25.226 21:07:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4006869 00:06:25.226 21:07:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.161 lslocks: write error 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4006869 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 4006869 ']' 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 4006869 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4006869 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4006869' 00:06:26.161 killing process with pid 4006869 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 4006869 00:06:26.161 21:07:22 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 4006869 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4006869 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4006869 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 4006869 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 4006869 ']' 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.161 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4006869) - No such process 00:06:26.161 ERROR: process (pid: 4006869) is no longer running 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:26.161 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.162 00:06:26.162 real 0m1.368s 00:06:26.162 user 0m1.349s 00:06:26.162 sys 0m0.659s 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.162 21:07:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.162 ************************************ 00:06:26.162 END TEST default_locks 00:06:26.162 ************************************ 00:06:26.425 21:07:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.425 21:07:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.425 21:07:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.425 21:07:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.425 ************************************ 00:06:26.425 START TEST default_locks_via_rpc 00:06:26.425 ************************************ 00:06:26.425 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:26.425 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4007115 00:06:26.425 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4007115 00:06:26.426 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.426 21:07:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4007115 ']' 00:06:26.426 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.426 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.426 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.426 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.426 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.426 [2024-07-14 21:07:23.151081] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:26.426 [2024-07-14 21:07:23.151140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007115 ] 00:06:26.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.426 [2024-07-14 21:07:23.217514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.426 [2024-07-14 21:07:23.257032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.689 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4007115 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4007115 00:06:26.690 21:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4007115 00:06:27.258 21:07:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 4007115 ']' 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 4007115 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007115 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007115' 00:06:27.258 killing process with pid 4007115 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 4007115 00:06:27.258 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 4007115 00:06:27.517 00:06:27.517 real 0m1.222s 00:06:27.517 user 0m1.182s 00:06:27.517 sys 0m0.584s 00:06:27.518 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.518 21:07:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.518 ************************************ 00:06:27.518 END TEST default_locks_via_rpc 00:06:27.518 ************************************ 00:06:27.518 21:07:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.518 21:07:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.518 21:07:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.518 21:07:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.776 ************************************ 00:06:27.776 START TEST non_locking_app_on_locked_coremask 00:06:27.776 ************************************ 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4007374 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4007374 /var/tmp/spdk.sock 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4007374 ']' 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
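Both default_locks runs above decide whether the target still holds its CPU core lock by piping lslocks into grep. A minimal sketch of that check, assuming the target PID is in $pid and that the lock files carry the spdk_cpu_lock prefix seen in the trace (the real helper lives in the cpu_locks test script and may differ in detail):

  # Succeeds if the process holds at least one spdk_cpu_lock file lock
  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 4006869 && echo "core lock is held"

The stray "lslocks: write error" lines in the log are most likely the EPIPE that lslocks reports when grep -q exits as soon as it finds a match, not a test failure.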
00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.776 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.776 [2024-07-14 21:07:24.436826] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:27.776 [2024-07-14 21:07:24.436875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007374 ] 00:06:27.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.776 [2024-07-14 21:07:24.496913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.776 [2024-07-14 21:07:24.536894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4007438 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4007438 /var/tmp/spdk2.sock 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4007438 ']' 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.035 21:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.035 [2024-07-14 21:07:24.745831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:28.035 [2024-07-14 21:07:24.745921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007438 ] 00:06:28.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.035 [2024-07-14 21:07:24.840732] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
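In the non_locking_app_on_locked_coremask run above, the first spdk_tgt claims core 0 normally while the second comes up on the same core with --disable-cpumask-locks and its own RPC socket, which is why both can coexist. A rough sketch of that setup with the flags taken from the trace (paths shortened, variable names illustrative):

  # First target: claims the spdk_cpu_lock for core 0, listens on /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x1 &
  locked_pid=$!

  # Second target: same core mask, but skips the core lock and uses a separate RPC socket
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  unlocked_pid=$!

Only locked_pid should then pass the lslocks check, which is exactly what the trace verifies before killing both processes.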
00:06:28.035 [2024-07-14 21:07:24.840761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.035 [2024-07-14 21:07:24.916103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.971 21:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.971 21:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:28.971 21:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4007374 00:06:28.971 21:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4007374 00:06:28.971 21:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.538 lslocks: write error 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4007374 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4007374 ']' 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4007374 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007374 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007374' 00:06:29.538 killing process with pid 4007374 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4007374 00:06:29.538 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4007374 00:06:30.105 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4007438 00:06:30.105 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4007438 ']' 00:06:30.105 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4007438 00:06:30.105 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:30.105 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.105 21:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007438 00:06:30.105 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.105 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.105 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007438' 00:06:30.105 
killing process with pid 4007438 00:06:30.105 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4007438 00:06:30.105 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4007438 00:06:30.673 00:06:30.673 real 0m2.869s 00:06:30.673 user 0m2.961s 00:06:30.673 sys 0m1.112s 00:06:30.673 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.673 21:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.673 ************************************ 00:06:30.673 END TEST non_locking_app_on_locked_coremask 00:06:30.673 ************************************ 00:06:30.673 21:07:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:30.673 21:07:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.673 21:07:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.673 21:07:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.673 ************************************ 00:06:30.673 START TEST locking_app_on_unlocked_coremask 00:06:30.673 ************************************ 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4007948 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4007948 /var/tmp/spdk.sock 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4007948 ']' 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.673 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.673 [2024-07-14 21:07:27.396139] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:30.673 [2024-07-14 21:07:27.396201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007948 ] 00:06:30.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.673 [2024-07-14 21:07:27.462832] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.673 [2024-07-14 21:07:27.462861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.673 [2024-07-14 21:07:27.500461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4007958 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4007958 /var/tmp/spdk2.sock 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4007958 ']' 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.931 21:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.931 [2024-07-14 21:07:27.705820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:30.931 [2024-07-14 21:07:27.705903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007958 ] 00:06:30.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.931 [2024-07-14 21:07:27.800880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.189 [2024-07-14 21:07:27.874733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.871 21:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.871 21:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:31.871 21:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4007958 00:06:31.871 21:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4007958 00:06:31.871 21:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.806 lslocks: write error 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4007948 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4007948 ']' 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 4007948 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007948 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007948' 00:06:33.064 killing process with pid 4007948 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 4007948 00:06:33.064 21:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 4007948 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4007958 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4007958 ']' 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 4007958 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007958 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007958' 00:06:33.631 killing process with pid 4007958 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 4007958 00:06:33.631 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 4007958 00:06:33.889 00:06:33.889 real 0m3.314s 00:06:33.889 user 0m3.456s 00:06:33.889 sys 0m1.240s 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.889 ************************************ 00:06:33.889 END TEST locking_app_on_unlocked_coremask 00:06:33.889 ************************************ 00:06:33.889 21:07:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.889 21:07:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.889 21:07:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.889 21:07:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.889 ************************************ 00:06:33.889 START TEST locking_app_on_locked_coremask 00:06:33.889 ************************************ 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4008534 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4008534 /var/tmp/spdk.sock 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4008534 ']' 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.889 21:07:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.889 [2024-07-14 21:07:30.781685] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:33.889 [2024-07-14 21:07:30.781731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008534 ] 00:06:34.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.149 [2024-07-14 21:07:30.847512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.149 [2024-07-14 21:07:30.883173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4008688 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4008688 /var/tmp/spdk2.sock 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:34.408 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4008688 /var/tmp/spdk2.sock 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4008688 /var/tmp/spdk2.sock 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4008688 ']' 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.409 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.409 [2024-07-14 21:07:31.092721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:34.409 [2024-07-14 21:07:31.092813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008688 ] 00:06:34.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.409 [2024-07-14 21:07:31.186988] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4008534 has claimed it. 00:06:34.409 [2024-07-14 21:07:31.187032] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.976 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4008688) - No such process 00:06:34.976 ERROR: process (pid: 4008688) is no longer running 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4008534 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4008534 00:06:34.976 21:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.543 lslocks: write error 00:06:35.543 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4008534 00:06:35.543 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4008534 ']' 00:06:35.543 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4008534 00:06:35.543 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:35.543 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.543 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4008534 00:06:35.802 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.802 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.802 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4008534' 00:06:35.802 killing process with pid 4008534 00:06:35.802 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4008534 00:06:35.802 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4008534 00:06:36.061 00:06:36.061 real 0m2.004s 00:06:36.061 user 0m2.115s 00:06:36.061 sys 0m0.787s 00:06:36.061 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.061 21:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.061 ************************************ 00:06:36.061 END TEST locking_app_on_locked_coremask 00:06:36.061 ************************************ 00:06:36.061 21:07:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.061 21:07:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.061 21:07:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.061 21:07:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.061 ************************************ 00:06:36.061 START TEST locking_overlapped_coremask 00:06:36.061 ************************************ 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4009074 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4009074 /var/tmp/spdk.sock 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 4009074 ']' 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.061 21:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.061 [2024-07-14 21:07:32.859874] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:36.061 [2024-07-14 21:07:32.859956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009074 ] 00:06:36.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.061 [2024-07-14 21:07:32.929418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.321 [2024-07-14 21:07:32.970200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.321 [2024-07-14 21:07:32.970220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.321 [2024-07-14 21:07:32.970228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4009086 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4009086 /var/tmp/spdk2.sock 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4009086 /var/tmp/spdk2.sock 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4009086 /var/tmp/spdk2.sock 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 4009086 ']' 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.321 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.321 [2024-07-14 21:07:33.179781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:36.321 [2024-07-14 21:07:33.179844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009086 ] 00:06:36.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.578 [2024-07-14 21:07:33.273372] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4009074 has claimed it. 00:06:36.578 [2024-07-14 21:07:33.273409] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.147 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4009086) - No such process 00:06:37.147 ERROR: process (pid: 4009086) is no longer running 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4009074 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 4009074 ']' 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 4009074 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009074 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009074' 00:06:37.147 killing process with pid 4009074 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
4009074 00:06:37.147 21:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 4009074 00:06:37.406 00:06:37.406 real 0m1.346s 00:06:37.406 user 0m3.647s 00:06:37.406 sys 0m0.402s 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.406 ************************************ 00:06:37.406 END TEST locking_overlapped_coremask 00:06:37.406 ************************************ 00:06:37.406 21:07:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.406 21:07:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.406 21:07:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.406 21:07:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.406 ************************************ 00:06:37.406 START TEST locking_overlapped_coremask_via_rpc 00:06:37.406 ************************************ 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4009363 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4009363 /var/tmp/spdk.sock 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4009363 ']' 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.406 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.406 [2024-07-14 21:07:34.277126] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.406 [2024-07-14 21:07:34.277174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009363 ] 00:06:37.406 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.665 [2024-07-14 21:07:34.342838] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.665 [2024-07-14 21:07:34.342865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.665 [2024-07-14 21:07:34.380885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.665 [2024-07-14 21:07:34.380980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.665 [2024-07-14 21:07:34.380982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.665 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.665 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:37.665 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4009383 00:06:37.665 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4009383 /var/tmp/spdk2.sock 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4009383 ']' 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.666 21:07:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 [2024-07-14 21:07:34.585353] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.925 [2024-07-14 21:07:34.585438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009383 ] 00:06:37.925 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.925 [2024-07-14 21:07:34.676477] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
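The locking_overlapped_coremask pair of tests above exercises two targets whose core masks overlap: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested. A quick way to see the overlap the claim_cpu_cores error is complaining about (purely illustrative arithmetic, not part of the test):

  # 0x7  = 0b00111 -> cores 0,1,2
  # 0x1c = 0b11100 -> cores 2,3,4
  printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2

In the via_rpc variant both targets start with --disable-cpumask-locks, so the conflict only appears once the locks are switched back on over RPC, as the next lines show.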
00:06:37.925 [2024-07-14 21:07:34.676510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.925 [2024-07-14 21:07:34.755971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.925 [2024-07-14 21:07:34.759486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.925 [2024-07-14 21:07:34.759487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.862 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.862 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:38.862 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.862 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.863 [2024-07-14 21:07:35.427513] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4009363 has claimed it. 
00:06:38.863 request: 00:06:38.863 { 00:06:38.863 "method": "framework_enable_cpumask_locks", 00:06:38.863 "req_id": 1 00:06:38.863 } 00:06:38.863 Got JSON-RPC error response 00:06:38.863 response: 00:06:38.863 { 00:06:38.863 "code": -32603, 00:06:38.863 "message": "Failed to claim CPU core: 2" 00:06:38.863 } 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4009363 /var/tmp/spdk.sock 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4009363 ']' 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4009383 /var/tmp/spdk2.sock 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4009383 ']' 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
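The JSON-RPC exchange above is the second target's failed attempt to re-enable CPU core locks while core 2 is already claimed. Outside the test harness the same call could be issued with scripts/rpc.py; the invocation below is an assumed equivalent of the test's rpc_cmd wrapper, not the exact command the suite runs:

  # First target (default socket /var/tmp/spdk.sock) takes the locks for cores 0-2
  ./scripts/rpc.py framework_enable_cpumask_locks

  # Second target (cores 2-4, separate socket) should fail with -32603 "Failed to claim CPU core: 2"
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The framework_disable_cpumask_locks method seen earlier in the trace is the counterpart used to drop the locks again at runtime.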
00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.863 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.122 00:06:39.122 real 0m1.548s 00:06:39.122 user 0m0.676s 00:06:39.122 sys 0m0.177s 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.122 21:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.122 ************************************ 00:06:39.122 END TEST locking_overlapped_coremask_via_rpc 00:06:39.122 ************************************ 00:06:39.122 21:07:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.122 21:07:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4009363 ]] 00:06:39.122 21:07:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4009363 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4009363 ']' 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4009363 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009363 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009363' 00:06:39.122 killing process with pid 4009363 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 4009363 00:06:39.122 21:07:35 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 4009363 00:06:39.382 21:07:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4009383 ]] 00:06:39.382 21:07:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4009383 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4009383 ']' 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4009383 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009383 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009383' 00:06:39.382 killing process with pid 4009383 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 4009383 00:06:39.382 21:07:36 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 4009383 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4009363 ]] 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4009363 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4009363 ']' 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4009363 00:06:39.951 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4009363) - No such process 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 4009363 is not found' 00:06:39.951 Process with pid 4009363 is not found 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4009383 ]] 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4009383 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4009383 ']' 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4009383 00:06:39.951 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4009383) - No such process 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 4009383 is not found' 00:06:39.951 Process with pid 4009383 is not found 00:06:39.951 21:07:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.951 00:06:39.951 real 0m15.029s 00:06:39.951 user 0m24.588s 00:06:39.951 sys 0m5.936s 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.951 21:07:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.951 ************************************ 00:06:39.951 END TEST cpu_locks 00:06:39.951 ************************************ 00:06:39.951 00:06:39.951 real 0m39.164s 00:06:39.951 user 1m11.991s 00:06:39.951 sys 0m10.095s 00:06:39.951 21:07:36 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.951 21:07:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.951 ************************************ 00:06:39.951 END TEST event 00:06:39.951 ************************************ 00:06:39.951 21:07:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:39.951 21:07:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.951 21:07:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.951 21:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:39.951 ************************************ 00:06:39.951 START TEST thread 00:06:39.951 ************************************ 00:06:39.951 21:07:36 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:39.951 * Looking for test storage... 00:06:39.951 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:39.951 21:07:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.951 21:07:36 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:39.951 21:07:36 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.951 21:07:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.951 ************************************ 00:06:39.951 START TEST thread_poller_perf 00:06:39.951 ************************************ 00:06:39.951 21:07:36 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.951 [2024-07-14 21:07:36.835337] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:39.951 [2024-07-14 21:07:36.835424] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009761 ] 00:06:40.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.210 [2024-07-14 21:07:36.907742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.210 [2024-07-14 21:07:36.946571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.210 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.149 ====================================== 00:06:41.149 busy:2504450784 (cyc) 00:06:41.149 total_run_count: 856000 00:06:41.149 tsc_hz: 2500000000 (cyc) 00:06:41.149 ====================================== 00:06:41.149 poller_cost: 2925 (cyc), 1170 (nsec) 00:06:41.149 00:06:41.149 real 0m1.187s 00:06:41.149 user 0m1.093s 00:06:41.149 sys 0m0.090s 00:06:41.149 21:07:38 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.149 21:07:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.149 ************************************ 00:06:41.149 END TEST thread_poller_perf 00:06:41.149 ************************************ 00:06:41.149 21:07:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.149 21:07:38 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:41.149 21:07:38 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.149 21:07:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.409 ************************************ 00:06:41.409 START TEST thread_poller_perf 00:06:41.409 ************************************ 00:06:41.409 21:07:38 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.409 [2024-07-14 21:07:38.103984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:41.409 [2024-07-14 21:07:38.104091] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010039 ] 00:06:41.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.409 [2024-07-14 21:07:38.176146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.409 [2024-07-14 21:07:38.213766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.409 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:42.791 ====================================== 00:06:42.791 busy:2501352548 (cyc) 00:06:42.791 total_run_count: 14110000 00:06:42.791 tsc_hz: 2500000000 (cyc) 00:06:42.791 ====================================== 00:06:42.791 poller_cost: 177 (cyc), 70 (nsec) 00:06:42.791 00:06:42.791 real 0m1.183s 00:06:42.791 user 0m1.083s 00:06:42.791 sys 0m0.096s 00:06:42.791 21:07:39 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.791 21:07:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.791 ************************************ 00:06:42.791 END TEST thread_poller_perf 00:06:42.791 ************************************ 00:06:42.791 21:07:39 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:42.791 21:07:39 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:42.791 21:07:39 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.791 21:07:39 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.791 21:07:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.791 ************************************ 00:06:42.791 START TEST thread_spdk_lock 00:06:42.791 ************************************ 00:06:42.791 21:07:39 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:42.791 [2024-07-14 21:07:39.372221] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
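The two poller_perf result blocks above are internally consistent: the reported poller_cost matches the measured busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A small sketch of that arithmetic in shell, using the figures from the 1-microsecond-period run:

    # hedged sketch -- rederive poller_cost from the reported counters
    busy=2504450784 runs=856000 tsc_hz=2500000000
    cyc=$(( busy / runs ))                         # 2925 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))          # 1170 nsec per poll at 2.5 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

For the zero-period run shown above, the same division gives 2501352548 / 14110000 ≈ 177 cycles, i.e. about 70 nsec per poll, which is why the busy-polling variant looks far cheaper per call.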
00:06:42.791 [2024-07-14 21:07:39.372304] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010327 ] 00:06:42.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.791 [2024-07-14 21:07:39.443596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.791 [2024-07-14 21:07:39.483820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.791 [2024-07-14 21:07:39.483822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.360 [2024-07-14 21:07:39.975593] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:43.360 [2024-07-14 21:07:39.975629] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:43.360 [2024-07-14 21:07:39.975639] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x13107c0 00:06:43.360 [2024-07-14 21:07:39.976521] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:43.360 [2024-07-14 21:07:39.976624] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:43.360 [2024-07-14 21:07:39.976644] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:43.360 Starting test contend 00:06:43.360 Worker Delay Wait us Hold us Total us 00:06:43.360 0 3 176082 186624 362707 00:06:43.360 1 5 92992 288154 381146 00:06:43.360 PASS test contend 00:06:43.360 Starting test hold_by_poller 00:06:43.360 PASS test hold_by_poller 00:06:43.360 Starting test hold_by_message 00:06:43.360 PASS test hold_by_message 00:06:43.360 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:43.360 100014 assertions passed 00:06:43.360 0 assertions failed 00:06:43.360 00:06:43.360 real 0m0.675s 00:06:43.360 user 0m1.071s 00:06:43.360 sys 0m0.093s 00:06:43.360 21:07:40 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.360 21:07:40 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:43.360 ************************************ 00:06:43.360 END TEST thread_spdk_lock 00:06:43.360 ************************************ 00:06:43.360 00:06:43.360 real 0m3.375s 00:06:43.360 user 0m3.367s 00:06:43.360 sys 0m0.516s 00:06:43.360 21:07:40 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.360 21:07:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.360 ************************************ 00:06:43.360 END TEST thread 00:06:43.360 ************************************ 00:06:43.360 21:07:40 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:43.360 21:07:40 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.360 21:07:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.360 21:07:40 -- common/autotest_common.sh@10 -- # set +x 00:06:43.360 ************************************ 00:06:43.360 START TEST accel 00:06:43.360 ************************************ 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:43.360 * Looking for test storage... 00:06:43.360 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:43.360 21:07:40 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:43.360 21:07:40 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:43.360 21:07:40 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.360 21:07:40 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4010563 00:06:43.360 21:07:40 accel -- accel/accel.sh@63 -- # waitforlisten 4010563 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@827 -- # '[' -z 4010563 ']' 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.360 21:07:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.360 21:07:40 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:43.360 21:07:40 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:43.360 21:07:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.360 21:07:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.360 21:07:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.360 21:07:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.360 21:07:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.360 21:07:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:43.360 21:07:40 accel -- accel/accel.sh@41 -- # jq -r . 00:06:43.360 [2024-07-14 21:07:40.258158] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
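The accel suite's get_expected_opcs step, underway here, starts a plain spdk_tgt (pid 4010563), queries its opcode-to-module assignments over JSON-RPC, and records the expected module for each opcode; with no hardware accel module configured, every opcode maps to the software module, which is what the long IFS== / read -r opc module loop below is consuming. A minimal sketch of the query itself, assuming scripts/rpc.py against the default /var/tmp/spdk.sock:

    # hedged sketch -- dump opcode assignments as key=value pairs, as accel.sh does
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'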
00:06:43.360 [2024-07-14 21:07:40.258232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010563 ] 00:06:43.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.619 [2024-07-14 21:07:40.325963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.619 [2024-07-14 21:07:40.364883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@860 -- # return 0 00:06:43.878 21:07:40 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:43.878 21:07:40 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:43.878 21:07:40 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:43.878 21:07:40 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:43.878 21:07:40 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:43.878 21:07:40 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.878 21:07:40 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 
21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.878 21:07:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.878 21:07:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.878 21:07:40 accel -- accel/accel.sh@75 -- # killprocess 4010563 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@946 -- # '[' -z 4010563 ']' 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@950 -- # kill -0 4010563 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@951 -- # uname 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4010563 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4010563' 00:06:43.878 killing process with pid 4010563 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@965 -- # kill 4010563 00:06:43.878 21:07:40 accel -- common/autotest_common.sh@970 -- # 
wait 4010563 00:06:44.137 21:07:40 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:44.137 21:07:40 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:44.137 21:07:40 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:44.137 21:07:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.137 21:07:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.137 21:07:40 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:44.137 21:07:40 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:44.137 21:07:41 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.137 21:07:41 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:44.396 21:07:41 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:44.396 21:07:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.397 21:07:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.397 21:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.397 ************************************ 00:06:44.397 START TEST accel_missing_filename 00:06:44.397 ************************************ 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.397 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.397 
21:07:41 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:44.397 21:07:41 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:44.397 [2024-07-14 21:07:41.104216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:44.397 [2024-07-14 21:07:41.104295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010693 ] 00:06:44.397 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.397 [2024-07-14 21:07:41.173093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.397 [2024-07-14 21:07:41.211811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.397 [2024-07-14 21:07:41.251627] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.656 [2024-07-14 21:07:41.311418] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:44.656 A filename is required. 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.656 00:06:44.656 real 0m0.288s 00:06:44.656 user 0m0.195s 00:06:44.656 sys 0m0.132s 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.656 21:07:41 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:44.656 ************************************ 00:06:44.656 END TEST accel_missing_filename 00:06:44.656 ************************************ 00:06:44.656 21:07:41 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:44.656 21:07:41 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:44.656 21:07:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.656 21:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.656 ************************************ 00:06:44.656 START TEST accel_compress_verify 00:06:44.656 ************************************ 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.656 21:07:41 accel.accel_compress_verify -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.656 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:44.656 21:07:41 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:44.656 [2024-07-14 21:07:41.463647] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:44.656 [2024-07-14 21:07:41.463715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010733 ] 00:06:44.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.656 [2024-07-14 21:07:41.523489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.916 [2024-07-14 21:07:41.561047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.916 [2024-07-14 21:07:41.600996] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.916 [2024-07-14 21:07:41.661394] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:44.916 00:06:44.916 Compression does not support the verify option, aborting. 
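Both negative tests in this stretch exercise accel_perf's argument validation rather than a data path: accel_missing_filename runs a compress workload without '-l <input file>' ("A filename is required."), and accel_compress_verify adds '-y' to a compress run, which is rejected with the message just above; in each case the NOT wrapper only requires that the command exit non-zero. A minimal sketch of the second failing invocation, assuming the accel_perf example binary under build/examples and the test/accel/bib input from the tree:

    # hedged sketch -- compress with verify enabled is expected to abort
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y
    # expected: "Compression does not support the verify option, aborting." and a non-zero exit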
00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.916 00:06:44.916 real 0m0.271s 00:06:44.916 user 0m0.181s 00:06:44.916 sys 0m0.127s 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.916 21:07:41 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:44.916 ************************************ 00:06:44.916 END TEST accel_compress_verify 00:06:44.916 ************************************ 00:06:44.916 21:07:41 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:44.916 21:07:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.916 21:07:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.916 21:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.916 ************************************ 00:06:44.916 START TEST accel_wrong_workload 00:06:44.916 ************************************ 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.916 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:44.916 21:07:41 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:45.176 Unsupported workload type: foobar 00:06:45.177 [2024-07-14 21:07:41.823344] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:45.177 accel_perf options: 00:06:45.177 [-h help message] 00:06:45.177 [-q queue depth per core] 00:06:45.177 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.177 [-T number of threads per core 00:06:45.177 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.177 [-t time in seconds] 00:06:45.177 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.177 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:45.177 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.177 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.177 [-S for crc32c workload, use this seed value (default 0) 00:06:45.177 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.177 [-f for fill workload, use this BYTE value (default 255) 00:06:45.177 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.177 [-y verify result if this switch is on] 00:06:45.177 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.177 Can be used to spread operations across a wider range of memory. 00:06:45.177 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:45.177 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.177 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.177 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.177 00:06:45.177 real 0m0.028s 00:06:45.177 user 0m0.009s 00:06:45.177 sys 0m0.019s 00:06:45.177 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.177 21:07:41 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 ************************************ 00:06:45.177 END TEST accel_wrong_workload 00:06:45.177 ************************************ 00:06:45.177 Error: writing output failed: Broken pipe 00:06:45.177 21:07:41 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.177 21:07:41 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:45.177 21:07:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.177 21:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 ************************************ 00:06:45.177 START TEST accel_negative_buffers 00:06:45.177 ************************************ 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.177 21:07:41 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:45.177 21:07:41 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:45.177 -x option must be non-negative. 00:06:45.177 [2024-07-14 21:07:41.932174] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:45.177 accel_perf options: 00:06:45.177 [-h help message] 00:06:45.177 [-q queue depth per core] 00:06:45.177 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.177 [-T number of threads per core 00:06:45.177 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.177 [-t time in seconds] 00:06:45.177 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.177 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:45.177 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.177 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.177 [-S for crc32c workload, use this seed value (default 0) 00:06:45.177 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.177 [-f for fill workload, use this BYTE value (default 255) 00:06:45.177 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.177 [-y verify result if this switch is on] 00:06:45.177 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.177 Can be used to spread operations across a wider range of memory. 
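The two usage dumps above are accel_perf rejecting '-w foobar' and '-x -1' respectively; the 'Error: writing output failed: Broken pipe' lines are most likely just the /dev/fd/62 config pipe closing when the tool bails out early, and neither negative test fails because of them. Going by the option list above, invocations that would clear this argument parsing look like the following sketch (the xor buffer count is an illustrative value):

    # hedged sketch -- a supported workload, and a non-negative -x for xor
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # same flags the crc32c test below uses
    ./build/examples/accel_perf -t 1 -w xor -y -x 3        # xor with three source buffers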
00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.177 00:06:45.177 real 0m0.028s 00:06:45.177 user 0m0.012s 00:06:45.177 sys 0m0.016s 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.177 21:07:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 ************************************ 00:06:45.177 END TEST accel_negative_buffers 00:06:45.177 ************************************ 00:06:45.177 Error: writing output failed: Broken pipe 00:06:45.177 21:07:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:45.177 21:07:41 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:45.177 21:07:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.177 21:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 ************************************ 00:06:45.177 START TEST accel_crc32c 00:06:45.177 ************************************ 00:06:45.177 21:07:42 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:45.177 21:07:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:45.177 [2024-07-14 21:07:42.034311] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:45.177 [2024-07-14 21:07:42.034392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011028 ] 00:06:45.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.437 [2024-07-14 21:07:42.107160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.437 [2024-07-14 21:07:42.149239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:45.437 21:07:42 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.437 21:07:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:46.815 21:07:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.815 00:06:46.815 real 0m1.298s 00:06:46.815 user 0m1.160s 00:06:46.815 sys 0m0.143s 00:06:46.815 21:07:43 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.815 21:07:43 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:46.815 ************************************ 00:06:46.815 END TEST accel_crc32c 00:06:46.815 ************************************ 00:06:46.815 21:07:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:46.815 21:07:43 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.815 21:07:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.815 21:07:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.815 ************************************ 00:06:46.815 START TEST accel_crc32c_C2 00:06:46.815 ************************************ 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:46.815 [2024-07-14 21:07:43.402010] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:46.815 [2024-07-14 21:07:43.402090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011292 ] 00:06:46.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.815 [2024-07-14 21:07:43.472066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.815 [2024-07-14 21:07:43.509910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.815 21:07:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.193 00:06:48.193 real 0m1.288s 00:06:48.193 user 0m1.176s 00:06:48.193 sys 0m0.117s 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.193 21:07:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 ************************************ 00:06:48.193 END TEST accel_crc32c_C2 00:06:48.193 ************************************ 00:06:48.193 21:07:44 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:48.193 21:07:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:48.193 21:07:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.193 21:07:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 ************************************ 00:06:48.193 START TEST accel_copy 00:06:48.193 ************************************ 00:06:48.193 21:07:44 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:48.193 21:07:44 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:06:48.193 [2024-07-14 21:07:44.762122] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:48.193 [2024-07-14 21:07:44.762199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011484 ] 00:06:48.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.193 [2024-07-14 21:07:44.829996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.193 [2024-07-14 21:07:44.867456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.193 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.194 21:07:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:49.130 21:07:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.130 00:06:49.130 real 0m1.285s 00:06:49.130 user 0m1.161s 00:06:49.130 sys 0m0.129s 00:06:49.130 21:07:46 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.130 21:07:46 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:49.130 ************************************ 00:06:49.130 END TEST accel_copy 00:06:49.130 ************************************ 00:06:49.388 21:07:46 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.388 21:07:46 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:49.388 21:07:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.388 21:07:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.388 ************************************ 00:06:49.388 START TEST accel_fill 00:06:49.388 ************************************ 00:06:49.388 21:07:46 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.388 21:07:46 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:49.389 [2024-07-14 21:07:46.118556] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
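Of the workloads exercised in this stretch of the log, accel_fill is the only one whose run_test line adds extra knobs (-f 128 -q 64 -a 64) on top of the common "-t 1 -w <workload> -y" pattern. A small sketch collecting the argument sets exactly as they appear in the run_test lines (copy, fill, copy_crc32c, dualcast and compare), assuming the same accel_perf path as above; flag semantics are not asserted here.

  #!/usr/bin/env bash
  # Sketch: per-workload accel_perf arguments, copied verbatim from the
  # run_test lines in this log section.
  PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
  declare -A ARGS=(
    [copy]="-t 1 -w copy -y"
    [fill]="-t 1 -w fill -f 128 -q 64 -a 64 -y"
    [copy_crc32c]="-t 1 -w copy_crc32c -y"
    [dualcast]="-t 1 -w dualcast -y"
    [compare]="-t 1 -w compare -y"
  )
  for wl in copy fill copy_crc32c dualcast compare; do
    echo "== $wl =="
    "$PERF" ${ARGS[$wl]}   # word-splitting of the argument string is intentional
  done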
00:06:49.389 [2024-07-14 21:07:46.118637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011677 ] 00:06:49.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.389 [2024-07-14 21:07:46.187981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.389 [2024-07-14 21:07:46.225275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.389 21:07:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:50.768 21:07:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.768 00:06:50.768 real 0m1.286s 00:06:50.768 user 0m1.160s 00:06:50.768 sys 0m0.131s 00:06:50.768 21:07:47 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.768 21:07:47 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:50.768 ************************************ 00:06:50.768 END TEST accel_fill 00:06:50.768 ************************************ 00:06:50.768 21:07:47 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:50.768 21:07:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:50.768 21:07:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.768 21:07:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.768 ************************************ 00:06:50.768 START TEST accel_copy_crc32c 00:06:50.768 ************************************ 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:50.768 [2024-07-14 21:07:47.469076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:50.768 [2024-07-14 21:07:47.469118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011922 ] 00:06:50.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.768 [2024-07-14 21:07:47.532944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.768 [2024-07-14 21:07:47.571543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.768 21:07:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.145 21:07:48 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.145 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.146 00:06:52.146 real 0m1.271s 00:06:52.146 user 0m1.156s 00:06:52.146 sys 0m0.120s 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.146 21:07:48 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:52.146 ************************************ 00:06:52.146 END TEST accel_copy_crc32c 00:06:52.146 ************************************ 00:06:52.146 21:07:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.146 21:07:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:52.146 21:07:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.146 21:07:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.146 ************************************ 00:06:52.146 START TEST accel_copy_crc32c_C2 00:06:52.146 ************************************ 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:52.146 [2024-07-14 21:07:48.827518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:52.146 [2024-07-14 21:07:48.827599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012208 ] 00:06:52.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.146 [2024-07-14 21:07:48.898015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.146 [2024-07-14 21:07:48.936590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.146 21:07:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.524 00:06:53.524 real 0m1.294s 00:06:53.524 user 0m1.159s 00:06:53.524 sys 0m0.140s 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.524 21:07:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:53.524 ************************************ 00:06:53.524 END TEST 
accel_copy_crc32c_C2 00:06:53.524 ************************************ 00:06:53.524 21:07:50 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:53.524 21:07:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:53.524 21:07:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.524 21:07:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.524 ************************************ 00:06:53.524 START TEST accel_dualcast 00:06:53.524 ************************************ 00:06:53.524 21:07:50 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:53.524 [2024-07-14 21:07:50.195181] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
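Each test block in this log closes with a real/user/sys timing triplet followed by an "END TEST <name>" banner (accel_copy_crc32c_C2 just above ends with real 0m1.294s). A hypothetical helper for pulling those summaries out of a saved copy of a log like this one; the autotest.log filename is an assumption, not something the harness produces.

  #!/usr/bin/env bash
  # Sketch: extract per-test elapsed times and completion markers from a saved log.
  # Hypothetical helper, not part of the SPDK tree.
  LOG=${1:-autotest.log}
  # Emit "real 0mX.XXXs" and "END TEST <name>" in the order they appear,
  # so each elapsed time is followed by the test it belongs to.
  grep -oE 'real[[:space:]]+0m[0-9.]+s|END TEST [A-Za-z0-9_]+' "$LOG"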
00:06:53.524 [2024-07-14 21:07:50.195263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012487 ] 00:06:53.524 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.524 [2024-07-14 21:07:50.264635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.524 [2024-07-14 21:07:50.303273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 
21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.524 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.525 21:07:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:54.903 21:07:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.903 00:06:54.903 real 0m1.289s 00:06:54.903 user 0m1.166s 00:06:54.903 sys 0m0.128s 00:06:54.903 21:07:51 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.903 21:07:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:54.903 ************************************ 00:06:54.903 END TEST accel_dualcast 00:06:54.903 ************************************ 00:06:54.903 21:07:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:54.903 21:07:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:54.903 21:07:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.903 21:07:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.903 ************************************ 00:06:54.903 START TEST accel_compare 00:06:54.903 ************************************ 00:06:54.903 21:07:51 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:54.903 [2024-07-14 21:07:51.557176] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:54.903 [2024-07-14 21:07:51.557256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012774 ] 00:06:54.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.903 [2024-07-14 21:07:51.627703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.903 [2024-07-14 21:07:51.665228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:54.903 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.904 21:07:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 21:07:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.282 21:07:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:56.283 21:07:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.283 00:06:56.283 real 0m1.290s 00:06:56.283 user 0m1.164s 00:06:56.283 sys 0m0.130s 00:06:56.283 21:07:52 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.283 21:07:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:56.283 ************************************ 00:06:56.283 END TEST accel_compare 00:06:56.283 ************************************ 00:06:56.283 21:07:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:56.283 21:07:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.283 21:07:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.283 21:07:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.283 ************************************ 00:06:56.283 START TEST accel_xor 00:06:56.283 ************************************ 00:06:56.283 21:07:52 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:56.283 21:07:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:56.283 [2024-07-14 21:07:52.909917] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:56.283 [2024-07-14 21:07:52.909972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013053 ] 00:06:56.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.283 [2024-07-14 21:07:52.974512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.283 [2024-07-14 21:07:53.011872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 21:07:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.674 
21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:57.674 21:07:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.674 00:06:57.674 real 0m1.275s 00:06:57.674 user 0m1.154s 00:06:57.674 sys 0m0.125s 00:06:57.674 21:07:54 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.675 21:07:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:57.675 ************************************ 00:06:57.675 END TEST accel_xor 00:06:57.675 ************************************ 00:06:57.675 21:07:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:57.675 21:07:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:57.675 21:07:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.675 21:07:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.675 ************************************ 00:06:57.675 START TEST accel_xor 00:06:57.675 ************************************ 00:06:57.675 21:07:54 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:57.675 [2024-07-14 21:07:54.269973] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:57.675 [2024-07-14 21:07:54.270078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013334 ] 00:06:57.675 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.675 [2024-07-14 21:07:54.342324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.675 [2024-07-14 21:07:54.382370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.675 21:07:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.055 
21:07:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:59.055 21:07:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.055 00:06:59.055 real 0m1.296s 00:06:59.055 user 0m1.163s 00:06:59.055 sys 0m0.138s 00:06:59.055 21:07:55 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.055 21:07:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 ************************************ 00:06:59.055 END TEST accel_xor 00:06:59.055 ************************************ 00:06:59.055 21:07:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:59.055 21:07:55 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:59.055 21:07:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.055 21:07:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.055 ************************************ 00:06:59.055 START TEST accel_dif_verify 00:06:59.055 ************************************ 00:06:59.055 21:07:55 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:59.055 [2024-07-14 21:07:55.634754] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:59.055 [2024-07-14 21:07:55.634834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013588 ] 00:06:59.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.055 [2024-07-14 21:07:55.703144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.055 [2024-07-14 21:07:55.741007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.055 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 
21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.056 21:07:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.435 
21:07:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:00.435 21:07:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.435 00:07:00.435 real 0m1.288s 00:07:00.435 user 0m1.161s 00:07:00.435 sys 0m0.133s 00:07:00.435 21:07:56 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.435 21:07:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.435 ************************************ 00:07:00.435 END TEST accel_dif_verify 00:07:00.435 ************************************ 00:07:00.435 21:07:56 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:00.435 21:07:56 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:00.435 21:07:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.435 21:07:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.435 ************************************ 00:07:00.435 START TEST accel_dif_generate 00:07:00.435 ************************************ 00:07:00.435 21:07:56 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 
21:07:56 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:00.435 21:07:56 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:00.435 [2024-07-14 21:07:56.997200] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:00.435 [2024-07-14 21:07:56.997279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013779 ] 00:07:00.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.435 [2024-07-14 21:07:57.067169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.435 [2024-07-14 21:07:57.104694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.435 21:07:57 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.435 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.436 21:07:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:01.430 21:07:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.430 00:07:01.430 real 0m1.288s 00:07:01.430 user 0m1.157s 00:07:01.430 sys 
0m0.134s 00:07:01.430 21:07:58 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.430 21:07:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:01.430 ************************************ 00:07:01.430 END TEST accel_dif_generate 00:07:01.430 ************************************ 00:07:01.430 21:07:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:01.430 21:07:58 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:01.430 21:07:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.430 21:07:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.689 ************************************ 00:07:01.689 START TEST accel_dif_generate_copy 00:07:01.689 ************************************ 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:01.689 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:01.690 [2024-07-14 21:07:58.361635] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
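For reference, the dif_generate_copy pass above is simply the accel_perf example binary run for one second against the software module, with the harness feeding its generated accel JSON config over fd 62 (the -c /dev/fd/62 argument recorded above). A minimal hand-run sketch of the same invocation — accel.json is a hypothetical stand-in for that config, whose exact contents this log does not show — would be roughly:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # sketch only: the harness supplies its accel JSON config on fd 62; a placeholder
  # file is used here because the generated config is not captured in this log
  "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate_copy 62< accel.json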
00:07:01.690 [2024-07-14 21:07:58.361715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013971 ] 00:07:01.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.690 [2024-07-14 21:07:58.430646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.690 [2024-07-14 21:07:58.468746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.690 21:07:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.068 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.069 00:07:03.069 real 0m1.288s 00:07:03.069 user 0m1.160s 00:07:03.069 sys 0m0.131s 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.069 21:07:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.069 ************************************ 00:07:03.069 END TEST accel_dif_generate_copy 00:07:03.069 ************************************ 00:07:03.069 21:07:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:03.069 21:07:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.069 21:07:59 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:03.069 21:07:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.069 21:07:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.069 ************************************ 00:07:03.069 START TEST accel_comp 00:07:03.069 ************************************ 00:07:03.069 21:07:59 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:03.069 [2024-07-14 21:07:59.724847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:03.069 [2024-07-14 21:07:59.724927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014231 ] 00:07:03.069 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.069 [2024-07-14 21:07:59.793229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.069 [2024-07-14 21:07:59.830071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.069 21:07:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:04.448 21:08:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.448 00:07:04.448 real 0m1.288s 00:07:04.448 user 0m1.166s 00:07:04.448 sys 0m0.125s 00:07:04.448 21:08:00 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.448 21:08:00 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:04.448 ************************************ 00:07:04.448 END TEST accel_comp 00:07:04.448 ************************************ 00:07:04.448 21:08:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:04.448 21:08:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:04.448 21:08:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.448 21:08:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.448 ************************************ 00:07:04.448 START TEST accel_decomp 00:07:04.448 ************************************ 
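Like the compress pass that just finished, the accel_decomp test that follows points accel_perf at the bundled test/accel/bib input via -l; the decompress variant additionally passes -y, per the command line recorded below. Under the same assumptions as the earlier sketch (SPDK_DIR set to the build tree, accel.json a placeholder config), the run below corresponds roughly to:

  # sketch: one-second software decompress of the bundled test input;
  # flags copied verbatim from the harness invocation recorded below
  "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y 62< accel.json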
00:07:04.448 21:08:01 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:04.448 [2024-07-14 21:08:01.083252] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:04.448 [2024-07-14 21:08:01.083331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014516 ] 00:07:04.448 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.448 [2024-07-14 21:08:01.150668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.448 [2024-07-14 21:08:01.188760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- 
# val= 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.448 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 21:08:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.827 21:08:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.827 00:07:05.827 real 0m1.288s 00:07:05.827 user 0m1.162s 00:07:05.827 sys 0m0.128s 00:07:05.827 21:08:02 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.827 21:08:02 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:05.827 ************************************ 00:07:05.827 END TEST accel_decomp 00:07:05.827 
************************************ 00:07:05.827 21:08:02 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.827 21:08:02 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:05.827 21:08:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.827 21:08:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.827 ************************************ 00:07:05.827 START TEST accel_decmop_full 00:07:05.827 ************************************ 00:07:05.827 21:08:02 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.827 21:08:02 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:05.827 21:08:02 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:05.827 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:05.828 [2024-07-14 21:08:02.441144] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
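The accel_decmop_full variant starting here differs from the plain decompress run only in passing -o 0; per the values recorded below, that switches the working buffer from the default '4096 bytes' to the full 111250-byte bib file. Sketched under the same assumptions as above:

  # sketch: full-buffer decompress — with -o 0 the input is handled as a single
  # 111250-byte buffer instead of 4096-byte blocks (sizes taken from the log)
  "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 62< accel.json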
00:07:05.828 [2024-07-14 21:08:02.441223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014799 ] 00:07:05.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.828 [2024-07-14 21:08:02.508388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.828 [2024-07-14 21:08:02.545011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.828 21:08:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.208 21:08:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.208 21:08:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.208 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.208 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:07:07.208 21:08:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.208 21:08:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.209 21:08:03 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.209 00:07:07.209 real 0m1.290s 00:07:07.209 user 0m1.166s 00:07:07.209 sys 0m0.127s 00:07:07.209 21:08:03 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.209 21:08:03 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:07.209 ************************************ 00:07:07.209 END TEST accel_decmop_full 00:07:07.209 ************************************ 00:07:07.209 21:08:03 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.209 21:08:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:07.209 21:08:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.209 21:08:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.209 ************************************ 00:07:07.209 START TEST accel_decomp_mcore 00:07:07.209 ************************************ 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:07.209 [2024-07-14 21:08:03.810162] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:07.209 [2024-07-14 21:08:03.810242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015088 ] 00:07:07.209 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.209 [2024-07-14 21:08:03.879513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.209 [2024-07-14 21:08:03.919927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.209 [2024-07-14 21:08:03.920022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.209 [2024-07-14 21:08:03.920105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.209 [2024-07-14 21:08:03.920107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.209 21:08:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.589 00:07:08.589 real 0m1.307s 00:07:08.589 user 0m4.504s 00:07:08.589 sys 0m0.141s 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.589 21:08:05 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:08.589 ************************************ 00:07:08.589 END TEST accel_decomp_mcore 00:07:08.589 ************************************ 00:07:08.589 21:08:05 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.589 21:08:05 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:08.589 21:08:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.589 21:08:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.589 ************************************ 00:07:08.589 START TEST accel_decomp_full_mcore 00:07:08.589 ************************************ 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:08.589 [2024-07-14 21:08:05.199686] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:08.589 [2024-07-14 21:08:05.199774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015370 ] 00:07:08.589 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.589 [2024-07-14 21:08:05.268340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.589 [2024-07-14 21:08:05.310937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.589 [2024-07-14 21:08:05.311034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.589 [2024-07-14 21:08:05.311114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.589 [2024-07-14 21:08:05.311116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.589 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:08.590 21:08:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.590 21:08:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.968 00:07:09.968 real 0m1.320s 00:07:09.968 user 0m4.550s 00:07:09.968 sys 0m0.137s 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.968 21:08:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:09.968 ************************************ 00:07:09.968 END TEST accel_decomp_full_mcore 00:07:09.968 ************************************ 00:07:09.968 21:08:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.968 21:08:06 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:09.968 21:08:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.968 21:08:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.968 ************************************ 00:07:09.968 START TEST accel_decomp_mthread 00:07:09.968 ************************************ 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:07:09.968 [2024-07-14 21:08:06.603849] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:09.968 [2024-07-14 21:08:06.603956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015654 ] 00:07:09.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.968 [2024-07-14 21:08:06.674608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.968 [2024-07-14 21:08:06.714762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 
21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.968 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.969 21:08:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.969 21:08:06 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.354 00:07:11.354 real 0m1.306s 00:07:11.354 user 0m1.188s 00:07:11.354 sys 0m0.133s 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.354 21:08:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:11.354 ************************************ 00:07:11.354 END TEST accel_decomp_mthread 00:07:11.354 ************************************ 00:07:11.354 21:08:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.354 21:08:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:11.354 21:08:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.354 
21:08:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.354 ************************************ 00:07:11.354 START TEST accel_decomp_full_mthread 00:07:11.354 ************************************ 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:11.354 21:08:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:11.354 [2024-07-14 21:08:07.994020] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
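The accel.sh@12 entry above names the actual binary behind all of these decompress cases; only the trailing options differ between them. A sketch of the two invocations traced in this stretch of the log, using this job's workspace paths: the -o 0 variants run with the full 111250-byte data size seen in the trace (the non-full cases use 4096 bytes), -m 0xf spreads the work over a 4-core mask, and -T 2 is the multi-thread variant on a single core (the EAL parameters below show -c 0x1). The JSON accel config normally arrives on fd 62 from the harness's build_accel_config, so a standalone run would have to supply that on fd 62 itself.

# accel_decomp_full_mcore: full-size buffers (-o 0) across a 4-core mask
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf

# accel_decomp_full_mthread: same data, single core, multi-thread variant (-T 2)
"$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2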
00:07:11.354 [2024-07-14 21:08:07.994103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015902 ] 00:07:11.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.354 [2024-07-14 21:08:08.066437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.354 [2024-07-14 21:08:08.107933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.355 21:08:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.734 00:07:12.734 real 0m1.327s 00:07:12.734 user 0m1.194s 00:07:12.734 sys 0m0.147s 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.734 21:08:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:12.734 ************************************ 00:07:12.734 END TEST accel_decomp_full_mthread 00:07:12.734 
************************************ 00:07:12.734 21:08:09 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:12.734 21:08:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:12.734 21:08:09 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:12.734 21:08:09 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:12.734 21:08:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.734 21:08:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.734 21:08:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.734 21:08:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.734 21:08:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.734 21:08:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.734 21:08:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.734 21:08:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:12.734 21:08:09 accel -- accel/accel.sh@41 -- # jq -r . 00:07:12.734 ************************************ 00:07:12.734 START TEST accel_dif_functional_tests 00:07:12.734 ************************************ 00:07:12.734 21:08:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:12.734 [2024-07-14 21:08:09.402105] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:12.734 [2024-07-14 21:08:09.402186] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016136 ] 00:07:12.734 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.734 [2024-07-14 21:08:09.472185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.734 [2024-07-14 21:08:09.512787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.734 [2024-07-14 21:08:09.512878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.734 [2024-07-14 21:08:09.512880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.734 00:07:12.734 00:07:12.734 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.734 http://cunit.sourceforge.net/ 00:07:12.734 00:07:12.734 00:07:12.734 Suite: accel_dif 00:07:12.734 Test: verify: DIF generated, GUARD check ...passed 00:07:12.734 Test: verify: DIF generated, APPTAG check ...passed 00:07:12.734 Test: verify: DIF generated, REFTAG check ...passed 00:07:12.734 Test: verify: DIF not generated, GUARD check ...[2024-07-14 21:08:09.575784] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:12.734 passed 00:07:12.734 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 21:08:09.575839] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:12.734 passed 00:07:12.734 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 21:08:09.575864] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:12.734 passed 00:07:12.734 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:12.734 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 21:08:09.575912] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:12.734 passed 00:07:12.734 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:12.734 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:12.734 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:12.734 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 21:08:09.576011] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:12.734 passed 00:07:12.734 Test: verify copy: DIF generated, GUARD check ...passed 00:07:12.734 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:12.734 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:12.734 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 21:08:09.576117] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:12.734 passed 00:07:12.734 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 21:08:09.576144] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:12.734 passed 00:07:12.734 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 21:08:09.576169] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:12.734 passed 00:07:12.734 Test: generate copy: DIF generated, GUARD check ...passed 00:07:12.734 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:12.734 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:12.734 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:12.734 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:12.734 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:12.734 Test: generate copy: iovecs-len validate ...[2024-07-14 21:08:09.576333] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:12.734 passed 00:07:12.734 Test: generate copy: buffer alignment validate ...passed 00:07:12.734 00:07:12.734 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.734 suites 1 1 n/a 0 0 00:07:12.734 tests 26 26 26 0 0 00:07:12.734 asserts 115 115 115 0 n/a 00:07:12.734 00:07:12.734 Elapsed time = 0.000 seconds 00:07:12.994 00:07:12.994 real 0m0.349s 00:07:12.994 user 0m0.533s 00:07:12.994 sys 0m0.157s 00:07:12.994 21:08:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.994 21:08:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:12.994 ************************************ 00:07:12.994 END TEST accel_dif_functional_tests 00:07:12.994 ************************************ 00:07:12.994 00:07:12.994 real 0m29.622s 00:07:12.994 user 0m32.494s 00:07:12.994 sys 0m4.889s 00:07:12.994 21:08:09 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.994 21:08:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.994 ************************************ 00:07:12.994 END TEST accel 00:07:12.994 ************************************ 00:07:12.994 21:08:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:12.994 21:08:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:12.994 21:08:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.994 21:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.994 ************************************ 00:07:12.994 START TEST accel_rpc 00:07:12.994 ************************************ 00:07:12.994 21:08:09 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.256 * Looking for test storage... 00:07:13.256 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:13.256 21:08:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.256 21:08:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4016298 00:07:13.256 21:08:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4016298 00:07:13.256 21:08:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:13.256 21:08:09 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 4016298 ']' 00:07:13.256 21:08:09 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.256 21:08:09 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.256 21:08:09 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.256 21:08:09 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.256 21:08:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.256 [2024-07-14 21:08:09.987834] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
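A note on the accel_dif_functional_tests output above: the *ERROR* lines from dif.c are the expected outcome of the negative-path cases. A test such as "verify: DIF not generated, GUARD check" deliberately presents data whose guard tag cannot match, so the logged "Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867" is exactly what lets that test pass; all 26 tests and 115 asserts succeed. The binary under test is the one named in the run_test line, sketched here; the harness supplies the accel JSON config on fd 62, so a standalone run must do the same.

# DIF functional test binary as wrapped by run_test above
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62   # -c /dev/fd/62: accel JSON config fed in by the harness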
00:07:13.256 [2024-07-14 21:08:09.987911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016298 ] 00:07:13.256 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.256 [2024-07-14 21:08:10.057840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.256 [2024-07-14 21:08:10.098384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.256 21:08:10 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.256 21:08:10 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:13.256 21:08:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:13.256 21:08:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:13.256 21:08:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:13.256 21:08:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:13.256 21:08:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:13.256 21:08:10 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:13.256 21:08:10 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.256 21:08:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.515 ************************************ 00:07:13.515 START TEST accel_assign_opcode 00:07:13.515 ************************************ 00:07:13.515 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:13.515 21:08:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:13.515 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.515 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:13.515 [2024-07-14 21:08:10.186971] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:13.515 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.515 21:08:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 [2024-07-14 21:08:10.194978] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:13.516 21:08:10 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.516 software 00:07:13.516 00:07:13.516 real 0m0.220s 00:07:13.516 user 0m0.044s 00:07:13.516 sys 0m0.013s 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.516 21:08:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:13.516 ************************************ 00:07:13.516 END TEST accel_assign_opcode 00:07:13.516 ************************************ 00:07:13.775 21:08:10 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4016298 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 4016298 ']' 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 4016298 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4016298 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4016298' 00:07:13.775 killing process with pid 4016298 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@965 -- # kill 4016298 00:07:13.775 21:08:10 accel_rpc -- common/autotest_common.sh@970 -- # wait 4016298 00:07:14.034 00:07:14.034 real 0m0.930s 00:07:14.034 user 0m0.842s 00:07:14.034 sys 0m0.448s 00:07:14.034 21:08:10 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.034 21:08:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.034 ************************************ 00:07:14.034 END TEST accel_rpc 00:07:14.034 ************************************ 00:07:14.034 21:08:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:14.034 21:08:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.034 21:08:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.034 21:08:10 -- common/autotest_common.sh@10 -- # set +x 00:07:14.034 ************************************ 00:07:14.034 START TEST app_cmdline 00:07:14.034 ************************************ 00:07:14.034 21:08:10 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:14.292 * Looking for test storage... 
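The accel_assign_opcode case just completed exercises the opcode-to-module assignment RPCs end to end: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned first to a bogus module ("incorrect") and then to software, framework_start_init finishes initialization, and accel_get_opc_assignments is checked for the result. A rough standalone reproduction is sketched below, assuming the default /var/tmp/spdk.sock socket and a root shell with hugepages configured as in this job; the sleep is a crude stand-in for the harness's waitforlisten helper.

# Minimal re-run of the accel_assign_opcode RPC sequence traced above
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
sleep 2   # stand-in for waitforlisten

"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
"$SPDK_DIR/scripts/rpc.py" framework_start_init                    # complete subsystem init
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # prints: software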
00:07:14.292 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:14.292 21:08:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:14.292 21:08:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4016565 00:07:14.292 21:08:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4016565 00:07:14.292 21:08:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:14.292 21:08:10 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 4016565 ']' 00:07:14.292 21:08:10 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.292 21:08:10 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.292 21:08:10 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.292 21:08:10 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.292 21:08:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 [2024-07-14 21:08:10.994741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:14.292 [2024-07-14 21:08:10.994807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016565 ] 00:07:14.292 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.292 [2024-07-14 21:08:11.061886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.292 [2024-07-14 21:08:11.100935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.551 21:08:11 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.551 21:08:11 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:14.551 21:08:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:14.551 { 00:07:14.551 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:14.551 "fields": { 00:07:14.551 "major": 24, 00:07:14.551 "minor": 5, 00:07:14.551 "patch": 1, 00:07:14.551 "suffix": "-pre", 00:07:14.551 "commit": "5fa2f5086" 00:07:14.551 } 00:07:14.551 } 00:07:14.810 21:08:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.811 request: 00:07:14.811 { 00:07:14.811 "method": "env_dpdk_get_mem_stats", 00:07:14.811 "req_id": 1 00:07:14.811 } 00:07:14.811 Got JSON-RPC error response 00:07:14.811 response: 00:07:14.811 { 00:07:14.811 "code": -32601, 00:07:14.811 "message": "Method not found" 00:07:14.811 } 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.811 21:08:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4016565 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 4016565 ']' 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 4016565 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.811 21:08:11 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4016565 00:07:15.069 21:08:11 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.069 21:08:11 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.069 21:08:11 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4016565' 00:07:15.069 killing process with pid 4016565 00:07:15.069 21:08:11 app_cmdline -- common/autotest_common.sh@965 -- # kill 4016565 00:07:15.069 21:08:11 app_cmdline -- common/autotest_common.sh@970 -- # wait 4016565 00:07:15.328 00:07:15.328 real 0m1.162s 00:07:15.328 user 0m1.356s 00:07:15.328 sys 0m0.437s 00:07:15.328 21:08:12 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.328 
21:08:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.328 ************************************ 00:07:15.328 END TEST app_cmdline 00:07:15.328 ************************************ 00:07:15.328 21:08:12 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:15.328 21:08:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.328 21:08:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.328 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.328 ************************************ 00:07:15.328 START TEST version 00:07:15.328 ************************************ 00:07:15.328 21:08:12 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:15.328 * Looking for test storage... 00:07:15.328 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.328 21:08:12 version -- app/version.sh@17 -- # get_header_version major 00:07:15.328 21:08:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.328 21:08:12 version -- app/version.sh@14 -- # cut -f2 00:07:15.328 21:08:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.328 21:08:12 version -- app/version.sh@17 -- # major=24 00:07:15.328 21:08:12 version -- app/version.sh@18 -- # get_header_version minor 00:07:15.587 21:08:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.587 21:08:12 version -- app/version.sh@14 -- # cut -f2 00:07:15.587 21:08:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.587 21:08:12 version -- app/version.sh@18 -- # minor=5 00:07:15.587 21:08:12 version -- app/version.sh@19 -- # get_header_version patch 00:07:15.587 21:08:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.587 21:08:12 version -- app/version.sh@14 -- # cut -f2 00:07:15.587 21:08:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.587 21:08:12 version -- app/version.sh@19 -- # patch=1 00:07:15.587 21:08:12 version -- app/version.sh@20 -- # get_header_version suffix 00:07:15.587 21:08:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:15.587 21:08:12 version -- app/version.sh@14 -- # cut -f2 00:07:15.587 21:08:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.587 21:08:12 version -- app/version.sh@20 -- # suffix=-pre 00:07:15.587 21:08:12 version -- app/version.sh@22 -- # version=24.5 00:07:15.588 21:08:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:15.588 21:08:12 version -- app/version.sh@25 -- # version=24.5.1 00:07:15.588 21:08:12 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:15.588 21:08:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:15.588 21:08:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:15.588 21:08:12 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:15.588 21:08:12 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:15.588 00:07:15.588 real 0m0.190s 00:07:15.588 user 0m0.087s 00:07:15.588 sys 0m0.152s 00:07:15.588 21:08:12 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.588 21:08:12 version -- common/autotest_common.sh@10 -- # set +x 00:07:15.588 ************************************ 00:07:15.588 END TEST version 00:07:15.588 ************************************ 00:07:15.588 21:08:12 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@198 -- # uname -s 00:07:15.588 21:08:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:15.588 21:08:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:15.588 21:08:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:15.588 21:08:12 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:15.588 21:08:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.588 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.588 21:08:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:07:15.588 21:08:12 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:07:15.588 21:08:12 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:07:15.588 21:08:12 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:07:15.588 21:08:12 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:15.588 21:08:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.588 21:08:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.588 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.588 ************************************ 00:07:15.588 START TEST llvm_fuzz 00:07:15.588 ************************************ 00:07:15.588 21:08:12 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:15.847 * Looking for test storage... 
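The version test that finishes earlier in the trace above stitches SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep, cut and tr, then compares the result against the Python package's spdk.__version__. A condensed sketch of that check, assuming a tab-delimited header and glossing over exactly how the -pre suffix turns into the rc0 tag seen in the trace:

    # Condensed sketch of test/app/version.sh as traced above; paths are
    # relative to the SPDK repo root and the suffix handling is simplified.
    get_header_version() {
        local name=$1    # MAJOR, MINOR, PATCH or SUFFIX
        grep -E "^#define SPDK_VERSION_${name}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 5
    patch=$(get_header_version PATCH)     # 1
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    version="${version}rc0"               # assumed mapping of the -pre suffix to rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] || echo "version mismatch: $py_version vs $version"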
00:07:15.847 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.847 21:08:12 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.847 21:08:12 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:15.847 ************************************ 00:07:15.847 START TEST nvmf_fuzz 00:07:15.847 ************************************ 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:15.847 * Looking for test storage... 
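llvm.sh, whose startup is traced above, discovers its fuzzer targets simply by listing test/fuzz/llvm/ and keeping the basenames, then dispatches a per-transport run.sh through run_test. A rough sketch of that loop; rootdir is assumed to be set by the harness, and the vfio branch is inferred by analogy with the nvmf dispatch visible in the trace:

    # Rough sketch of the target discovery in test/fuzz/llvm/llvm.sh.
    fuzzers=("$rootdir/test/fuzz/llvm/"*)       # common.sh llvm-gcov.sh nvmf vfio
    fuzzers=("${fuzzers[@]##*/}")               # keep only the basenames
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf) run_test nvmf_fuzz "$rootdir/test/fuzz/llvm/nvmf/run.sh" ;;
            vfio) run_test vfio_fuzz "$rootdir/test/fuzz/llvm/vfio/run.sh" ;;  # assumed by analogy
            *) ;;                               # helper scripts are skipped
        esac
    done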
00:07:15.847 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:15.847 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # 
CONFIG_IPSEC_MB_DIR= 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 
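Immediately above, applications.sh turns its own location into the directory layout the fuzz scripts rely on. Only the resulting values are visible in the trace, so the step that climbs from test/common back to the repo root is an assumption in this reduction:

    # Hedged reduction of the path setup in test/common/applications.sh.
    _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/common
    _root=$(readlink -f "$_root/../..")                      # .../spdk (assumed mechanism)
    _app_dir=$_root/build/bin                                # spdk_tgt, nvmf_tgt, ...
    _test_app_dir=$_root/test/app                            # fuzz apps built under test/
    _examples_dir=$_root/build/examples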
00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:15.848 #define SPDK_CONFIG_H 00:07:15.848 #define SPDK_CONFIG_APPS 1 00:07:15.848 #define SPDK_CONFIG_ARCH native 00:07:15.848 #undef SPDK_CONFIG_ASAN 00:07:15.848 #undef SPDK_CONFIG_AVAHI 00:07:15.848 #undef SPDK_CONFIG_CET 00:07:15.848 #define SPDK_CONFIG_COVERAGE 1 00:07:15.848 #define SPDK_CONFIG_CROSS_PREFIX 00:07:15.848 #undef SPDK_CONFIG_CRYPTO 00:07:15.848 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:15.848 #undef SPDK_CONFIG_CUSTOMOCF 00:07:15.848 #undef SPDK_CONFIG_DAOS 00:07:15.848 #define SPDK_CONFIG_DAOS_DIR 00:07:15.848 #define SPDK_CONFIG_DEBUG 1 00:07:15.848 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:15.848 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:15.848 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:15.848 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:15.848 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:15.848 #undef SPDK_CONFIG_DPDK_UADK 00:07:15.848 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:15.848 #define SPDK_CONFIG_EXAMPLES 1 00:07:15.848 #undef SPDK_CONFIG_FC 00:07:15.848 #define SPDK_CONFIG_FC_PATH 00:07:15.848 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:15.848 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:15.848 #undef SPDK_CONFIG_FUSE 00:07:15.848 #define SPDK_CONFIG_FUZZER 1 00:07:15.848 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:15.848 #undef SPDK_CONFIG_GOLANG 00:07:15.848 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:15.848 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:15.848 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:15.848 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:15.848 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:15.848 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:15.848 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:15.848 #define SPDK_CONFIG_IDXD 1 00:07:15.848 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:15.848 #undef SPDK_CONFIG_IPSEC_MB 00:07:15.848 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:15.848 #define SPDK_CONFIG_ISAL 1 00:07:15.848 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:15.848 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:15.848 #define SPDK_CONFIG_LIBDIR 00:07:15.848 #undef SPDK_CONFIG_LTO 00:07:15.848 #define SPDK_CONFIG_MAX_LCORES 00:07:15.848 #define SPDK_CONFIG_NVME_CUSE 1 00:07:15.848 #undef SPDK_CONFIG_OCF 00:07:15.848 #define SPDK_CONFIG_OCF_PATH 00:07:15.848 #define SPDK_CONFIG_OPENSSL_PATH 00:07:15.848 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:15.848 #define SPDK_CONFIG_PGO_DIR 
00:07:15.848 #undef SPDK_CONFIG_PGO_USE 00:07:15.848 #define SPDK_CONFIG_PREFIX /usr/local 00:07:15.848 #undef SPDK_CONFIG_RAID5F 00:07:15.848 #undef SPDK_CONFIG_RBD 00:07:15.848 #define SPDK_CONFIG_RDMA 1 00:07:15.848 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:15.848 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:15.848 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:15.848 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:15.848 #undef SPDK_CONFIG_SHARED 00:07:15.848 #undef SPDK_CONFIG_SMA 00:07:15.848 #define SPDK_CONFIG_TESTS 1 00:07:15.848 #undef SPDK_CONFIG_TSAN 00:07:15.848 #define SPDK_CONFIG_UBLK 1 00:07:15.848 #define SPDK_CONFIG_UBSAN 1 00:07:15.848 #undef SPDK_CONFIG_UNIT_TESTS 00:07:15.848 #undef SPDK_CONFIG_URING 00:07:15.848 #define SPDK_CONFIG_URING_PATH 00:07:15.848 #undef SPDK_CONFIG_URING_ZNS 00:07:15.848 #undef SPDK_CONFIG_USDT 00:07:15.848 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:15.848 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:15.848 #define SPDK_CONFIG_VFIO_USER 1 00:07:15.848 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:15.848 #define SPDK_CONFIG_VHOST 1 00:07:15.848 #define SPDK_CONFIG_VIRTIO 1 00:07:15.848 #undef SPDK_CONFIG_VTUNE 00:07:15.848 #define SPDK_CONFIG_VTUNE_DIR 00:07:15.848 #define SPDK_CONFIG_WERROR 1 00:07:15.848 #define SPDK_CONFIG_WPDK_DIR 00:07:15.848 #undef SPDK_CONFIG_XNVME 00:07:15.848 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:15.848 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:15.849 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:15.849 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:16.109 21:08:12 
llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 1 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:16.109 21:08:12 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:16.109 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:07:16.109 21:08:12 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- 
# : 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 
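Before the fuzz run itself starts, the harness exports the sanitizer knobs traced above. The option strings below are copied from the trace; the suppression-file lines that follow in the trace are only summarized here, so treat the exact redirection as an assumption:

    # Sanitizer environment as set up for the fuzz run (values from the trace).
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"   # assumed redirection; known benign leak
    export LSAN_OPTIONS=suppressions=$asan_suppression_file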
00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:16.110 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 4017050 ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 4017050 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.QQKcE8 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.QQKcE8/tests/nvmf /tmp/spdk.QQKcE8 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz 
-- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=954408960 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4330020864 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52895952896 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742317568 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8846364672 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866448384 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342484992 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348465152 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5980160 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870433792 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=724992 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:16.111 * Looking for test storage... 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=52895952896 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=11060957184 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.111 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@27 -- # exec 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.111 21:08:12 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.111 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.112 21:08:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:16.112 [2024-07-14 21:08:12.918027] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:16.112 [2024-07-14 21:08:12.918101] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017094 ] 00:07:16.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.370 [2024-07-14 21:08:13.182096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.371 [2024-07-14 21:08:13.213257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.371 [2024-07-14 21:08:13.265433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.629 [2024-07-14 21:08:13.281759] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:16.629 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.629 INFO: Seed: 693659591 00:07:16.629 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:16.629 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:16.629 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:16.629 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.629 #2 INITED exec/s: 0 rss: 63Mb 00:07:16.629 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
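A condensed shell sketch of what the nvmf/run.sh trace above does for fuzzer 0, reconstructed from the xtrace for readability (not the verbatim script): $SPDK_ROOT stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, and the redirection targets of the sed and the leak-suppression echoes are assumptions, since the trace does not show them.

    fuzzer_type=0
    # printf %02d gives a two-digit suffix, so fuzzer 0 listens on 4400, fuzzer 1 on 4401, ...
    port="44$(printf %02d "$fuzzer_type")"
    corpus_dir="$SPDK_ROOT/../corpus/llvm_nvmf_${fuzzer_type}"
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # rewrite the default trsvcid 4420 in the JSON config so this fuzzer gets its own listener
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"
    # LSAN leak suppressions for known allocations (assumed to land in /var/tmp/suppress_nvmf_fuzz)
    echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_nvmf_fuzz
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
    # launch the fuzzer against the freshly configured TCP listener for 1 second (-t 1)
    "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$SPDK_ROOT/../output/llvm/" -F "$trid" -c "/tmp/fuzz_json_${fuzzer_type}.conf" \
        -t 1 -D "$corpus_dir" -Z "$fuzzer_type"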
00:07:16.629 This may also happen if the target rejected all inputs we tried so far 00:07:16.629 [2024-07-14 21:08:13.326357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.629 [2024-07-14 21:08:13.326394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.886 NEW_FUNC[1/690]: 0x4939b0 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:16.886 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:16.886 #16 NEW cov: 11764 ft: 11744 corp: 2/66b lim: 320 exec/s: 0 rss: 69Mb L: 65/65 MS: 4 CopyPart-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:07:16.886 [2024-07-14 21:08:13.667208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:16.886 [2024-07-14 21:08:13.667252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.886 NEW_FUNC[1/2]: 0x139a840 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2038 00:07:16.886 NEW_FUNC[2/2]: 0x17c3c20 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:07:16.886 #21 NEW cov: 11948 ft: 12351 corp: 3/157b lim: 320 exec/s: 0 rss: 69Mb L: 91/91 MS: 5 CopyPart-ShuffleBytes-CrossOver-ChangeBit-InsertRepeatedBytes- 00:07:16.886 [2024-07-14 21:08:13.727264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.886 [2024-07-14 21:08:13.727301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.144 #27 NEW cov: 11954 ft: 12544 corp: 4/222b lim: 320 exec/s: 0 rss: 69Mb L: 65/91 MS: 1 ChangeBit- 00:07:17.144 [2024-07-14 21:08:13.807421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.144 [2024-07-14 21:08:13.807463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.144 #28 NEW cov: 12039 ft: 12883 corp: 5/287b lim: 320 exec/s: 0 rss: 69Mb L: 65/91 MS: 1 ShuffleBytes- 00:07:17.144 [2024-07-14 21:08:13.887681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.144 [2024-07-14 21:08:13.887716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.144 #29 NEW cov: 12039 ft: 13029 corp: 6/378b lim: 320 exec/s: 0 rss: 70Mb L: 91/91 MS: 1 ShuffleBytes- 00:07:17.144 [2024-07-14 21:08:13.967823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.144 [2024-07-14 21:08:13.967856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.144 #35 NEW cov: 12039 ft: 13209 corp: 
7/469b lim: 320 exec/s: 0 rss: 70Mb L: 91/91 MS: 1 ChangeByte- 00:07:17.402 [2024-07-14 21:08:14.048048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:2e2e2e2e cdw11:2e2e2e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e2e2e2effffffff 00:07:17.402 [2024-07-14 21:08:14.048081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.402 #36 NEW cov: 12039 ft: 13276 corp: 8/592b lim: 320 exec/s: 0 rss: 70Mb L: 123/123 MS: 1 InsertRepeatedBytes- 00:07:17.402 [2024-07-14 21:08:14.128283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.402 [2024-07-14 21:08:14.128315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.402 #37 NEW cov: 12039 ft: 13310 corp: 9/657b lim: 320 exec/s: 0 rss: 70Mb L: 65/123 MS: 1 ChangeBit- 00:07:17.402 [2024-07-14 21:08:14.208440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.402 [2024-07-14 21:08:14.208476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.402 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:17.402 #38 NEW cov: 12062 ft: 13380 corp: 10/726b lim: 320 exec/s: 0 rss: 70Mb L: 69/123 MS: 1 EraseBytes- 00:07:17.403 [2024-07-14 21:08:14.258666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (cd) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.403 [2024-07-14 21:08:14.258696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.403 [2024-07-14 21:08:14.258726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.403 [2024-07-14 21:08:14.258740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.403 NEW_FUNC[1/1]: 0x17c4780 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:17.403 #42 NEW cov: 12076 ft: 13935 corp: 11/854b lim: 320 exec/s: 0 rss: 70Mb L: 128/128 MS: 4 InsertByte-ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:07:17.661 [2024-07-14 21:08:14.318740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.661 [2024-07-14 21:08:14.318770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.661 #43 NEW cov: 12076 ft: 13966 corp: 12/943b lim: 320 exec/s: 43 rss: 70Mb L: 89/128 MS: 1 CopyPart- 00:07:17.661 [2024-07-14 21:08:14.368880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.661 [2024-07-14 21:08:14.368911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.661 #44 NEW cov: 12076 ft: 14002 corp: 
13/1034b lim: 320 exec/s: 44 rss: 70Mb L: 91/128 MS: 1 ShuffleBytes- 00:07:17.661 [2024-07-14 21:08:14.419003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.662 [2024-07-14 21:08:14.419032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.662 #45 NEW cov: 12076 ft: 14016 corp: 14/1120b lim: 320 exec/s: 45 rss: 70Mb L: 86/128 MS: 1 EraseBytes- 00:07:17.662 [2024-07-14 21:08:14.499195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:fdffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.662 [2024-07-14 21:08:14.499225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.662 #46 NEW cov: 12076 ft: 14098 corp: 15/1189b lim: 320 exec/s: 46 rss: 70Mb L: 69/128 MS: 1 ChangeBit- 00:07:17.920 [2024-07-14 21:08:14.579380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:2e2e2e2e cdw11:2e2e2e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e2e2e2effffffff 00:07:17.920 [2024-07-14 21:08:14.579411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.921 #47 NEW cov: 12076 ft: 14132 corp: 16/1312b lim: 320 exec/s: 47 rss: 70Mb L: 123/128 MS: 1 CopyPart- 00:07:17.921 [2024-07-14 21:08:14.659629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.921 [2024-07-14 21:08:14.659659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.921 #48 NEW cov: 12076 ft: 14150 corp: 17/1403b lim: 320 exec/s: 48 rss: 70Mb L: 91/128 MS: 1 ChangeBinInt- 00:07:17.921 [2024-07-14 21:08:14.729857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada 00:07:17.921 [2024-07-14 21:08:14.729887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.921 [2024-07-14 21:08:14.729920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (da) qid:0 cid:5 nsid:dadadada cdw10:dadadada cdw11:dadadada SGL TRANSPORT DATA BLOCK TRANSPORT 0xdadadadadadadada 00:07:17.921 [2024-07-14 21:08:14.729935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.921 #49 NEW cov: 12076 ft: 14162 corp: 18/1592b lim: 320 exec/s: 49 rss: 70Mb L: 189/189 MS: 1 InsertRepeatedBytes- 00:07:17.921 [2024-07-14 21:08:14.779929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:17.921 [2024-07-14 21:08:14.779959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.921 #50 NEW cov: 12076 ft: 14194 corp: 19/1683b lim: 320 exec/s: 50 rss: 70Mb L: 91/189 MS: 1 ChangeBit- 00:07:18.180 [2024-07-14 21:08:14.830061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 
nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.180 [2024-07-14 21:08:14.830091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.180 #51 NEW cov: 12076 ft: 14202 corp: 20/1774b lim: 320 exec/s: 51 rss: 70Mb L: 91/189 MS: 1 CopyPart- 00:07:18.180 [2024-07-14 21:08:14.910301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.180 [2024-07-14 21:08:14.910332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.180 #54 NEW cov: 12076 ft: 14236 corp: 21/1898b lim: 320 exec/s: 54 rss: 70Mb L: 124/189 MS: 3 EraseBytes-InsertByte-CrossOver- 00:07:18.180 [2024-07-14 21:08:14.961252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.180 [2024-07-14 21:08:14.961280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.180 [2024-07-14 21:08:14.961341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:08080808 cdw11:08080808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x808080808080808 00:07:18.180 [2024-07-14 21:08:14.961355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.180 NEW_FUNC[1/3]: 0x1190a40 in nvmf_ctrlr_abort /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3387 00:07:18.180 NEW_FUNC[2/3]: 0x11e6770 in nvmf_ctrlr_abort_on_pg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3363 00:07:18.180 #55 NEW cov: 12123 ft: 14587 corp: 22/2096b lim: 320 exec/s: 55 rss: 70Mb L: 198/198 MS: 1 InsertRepeatedBytes- 00:07:18.180 [2024-07-14 21:08:15.001362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.180 [2024-07-14 21:08:15.001392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.180 [2024-07-14 21:08:15.001460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:08080808 cdw11:08080808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x808080808080808 00:07:18.180 [2024-07-14 21:08:15.001475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.180 #56 NEW cov: 12123 ft: 14627 corp: 23/2302b lim: 320 exec/s: 56 rss: 70Mb L: 206/206 MS: 1 InsertRepeatedBytes- 00:07:18.180 [2024-07-14 21:08:15.051315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff 00:07:18.180 [2024-07-14 21:08:15.051341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.180 #58 NEW cov: 12123 ft: 14696 corp: 24/2421b lim: 320 exec/s: 58 rss: 70Mb L: 119/206 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:18.438 [2024-07-14 21:08:15.091653] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.438 [2024-07-14 21:08:15.091680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.438 [2024-07-14 21:08:15.091742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:08082a08 cdw11:08080808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x808080808080808 00:07:18.438 [2024-07-14 21:08:15.091757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.438 #59 NEW cov: 12123 ft: 14702 corp: 25/2620b lim: 320 exec/s: 59 rss: 70Mb L: 199/206 MS: 1 InsertByte- 00:07:18.438 [2024-07-14 21:08:15.131499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:fffff8ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.438 [2024-07-14 21:08:15.131524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.438 #60 NEW cov: 12123 ft: 14741 corp: 26/2711b lim: 320 exec/s: 60 rss: 70Mb L: 91/206 MS: 1 ChangeBinInt- 00:07:18.438 [2024-07-14 21:08:15.171666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.438 [2024-07-14 21:08:15.171691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.438 #61 NEW cov: 12123 ft: 14755 corp: 27/2802b lim: 320 exec/s: 61 rss: 70Mb L: 91/206 MS: 1 ShuffleBytes- 00:07:18.438 [2024-07-14 21:08:15.221792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.438 [2024-07-14 21:08:15.221818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.438 #62 NEW cov: 12123 ft: 14774 corp: 28/2894b lim: 320 exec/s: 62 rss: 70Mb L: 92/206 MS: 1 InsertByte- 00:07:18.438 [2024-07-14 21:08:15.262105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffdbffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.438 [2024-07-14 21:08:15.262131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.438 [2024-07-14 21:08:15.262195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:08080808 cdw11:08080808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x808080808080808 00:07:18.438 [2024-07-14 21:08:15.262209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.438 #63 NEW cov: 12123 ft: 14790 corp: 29/3100b lim: 320 exec/s: 63 rss: 70Mb L: 206/206 MS: 1 ChangeByte- 00:07:18.438 [2024-07-14 21:08:15.312050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:fffffffd cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:18.438 [2024-07-14 
21:08:15.312076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.697 #64 pulse cov: 12123 ft: 14846 corp: 29/3100b lim: 320 exec/s: 32 rss: 70Mb 00:07:18.697 #64 NEW cov: 12123 ft: 14846 corp: 30/3169b lim: 320 exec/s: 32 rss: 70Mb L: 69/206 MS: 1 CopyPart- 00:07:18.697 #64 DONE cov: 12123 ft: 14846 corp: 30/3169b lim: 320 exec/s: 32 rss: 70Mb 00:07:18.697 Done 64 runs in 2 second(s) 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.697 21:08:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:18.697 [2024-07-14 21:08:15.504785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
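The surrounding trace is the ../common.sh short-run driver moving from fuzzer 0 to fuzzer 1. A minimal sketch of that loop, reconstructed from the xtrace (run.sh@64 and common.sh@69-73) and not the verbatim script, with $SPDK_ROOT again standing in for the workspace path:

    # one fuzz target per '.fn =' entry in llvm_nvme_fuzz.c -- 25 in this run
    fuzz_num=$(grep -c '\.fn =' "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c")
    time=1                                # seconds per fuzzer in the short run
    for ((i = 0; i < fuzz_num; i++)); do
        # args: fuzzer_type, time (s), core mask; each index gets its own
        # port (4400 + i), /tmp/fuzz_json_<i>.conf and corpus directory
        start_llvm_fuzz "$i" "$time" 0x1
    done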
00:07:18.697 [2024-07-14 21:08:15.504875] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017515 ] 00:07:18.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.956 [2024-07-14 21:08:15.758551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.956 [2024-07-14 21:08:15.789335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.956 [2024-07-14 21:08:15.841553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.956 [2024-07-14 21:08:15.857886] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:19.215 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.215 INFO: Seed: 3267655878 00:07:19.215 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:19.215 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:19.215 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:19.215 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.215 #2 INITED exec/s: 0 rss: 62Mb 00:07:19.215 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.215 This may also happen if the target rejected all inputs we tried so far 00:07:19.215 [2024-07-14 21:08:15.906740] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.215 [2024-07-14 21:08:15.906862] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.215 [2024-07-14 21:08:15.907072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.215 [2024-07-14 21:08:15.907104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.215 [2024-07-14 21:08:15.907159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.215 [2024-07-14 21:08:15.907174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.474 NEW_FUNC[1/691]: 0x4942b0 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:19.474 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.474 #5 NEW cov: 11884 ft: 11861 corp: 2/14b lim: 30 exec/s: 0 rss: 69Mb L: 13/13 MS: 3 ChangeByte-CMP-InsertRepeatedBytes- DE: "\377~"- 00:07:19.474 [2024-07-14 21:08:16.217462] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.474 [2024-07-14 21:08:16.217586] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.474 [2024-07-14 21:08:16.217785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.217822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.474 [2024-07-14 21:08:16.217880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.217896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.474 NEW_FUNC[1/1]: 0x102ff20 in spdk_sock_prep_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/sock.h:303 00:07:19.474 #6 NEW cov: 12017 ft: 12490 corp: 3/27b lim: 30 exec/s: 0 rss: 69Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:19.474 [2024-07-14 21:08:16.267469] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.474 [2024-07-14 21:08:16.267582] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.474 [2024-07-14 21:08:16.267783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.267809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.474 [2024-07-14 21:08:16.267861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.267876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.474 #7 NEW cov: 12023 ft: 12737 corp: 4/40b lim: 30 exec/s: 0 rss: 69Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:19.474 [2024-07-14 21:08:16.317595] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (113204) > buf size (4096) 00:07:19.474 [2024-07-14 21:08:16.317707] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.474 [2024-07-14 21:08:16.317897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.317923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.474 [2024-07-14 21:08:16.317976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.317990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.474 #8 NEW cov: 12108 ft: 13070 corp: 5/53b lim: 30 exec/s: 0 rss: 69Mb L: 13/13 MS: 1 ChangeBit- 00:07:19.474 [2024-07-14 21:08:16.357696] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.474 [2024-07-14 21:08:16.357810] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.474 [2024-07-14 21:08:16.358001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.358026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.474 [2024-07-14 21:08:16.358079] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.474 [2024-07-14 21:08:16.358094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.733 #9 NEW cov: 12108 ft: 13121 corp: 6/67b lim: 30 exec/s: 0 rss: 69Mb L: 14/14 MS: 1 CrossOver- 00:07:19.733 [2024-07-14 21:08:16.397808] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x4682 00:07:19.733 [2024-07-14 21:08:16.398010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d132000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.398037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.733 #10 NEW cov: 12108 ft: 13570 corp: 7/76b lim: 30 exec/s: 0 rss: 69Mb L: 9/14 MS: 1 CMP- DE: "\3212\017HF\202*\000"- 00:07:19.733 [2024-07-14 21:08:16.437922] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8c8d 00:07:19.733 [2024-07-14 21:08:16.438034] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.733 [2024-07-14 21:08:16.438229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.438254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.733 [2024-07-14 21:08:16.438305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.438319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.733 #11 NEW cov: 12108 ft: 13641 corp: 8/89b lim: 30 exec/s: 0 rss: 69Mb L: 13/14 MS: 1 ChangeBit- 00:07:19.733 [2024-07-14 21:08:16.488086] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (112844) > buf size (4096) 00:07:19.733 [2024-07-14 21:08:16.488198] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (406068) > buf size (4096) 00:07:19.733 [2024-07-14 21:08:16.488390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e32000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.488430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.733 [2024-07-14 21:08:16.488487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c818c cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.488506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.733 #12 NEW cov: 12108 ft: 13712 corp: 9/106b lim: 30 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 CrossOver- 00:07:19.733 [2024-07-14 21:08:16.538215] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.733 [2024-07-14 21:08:16.538330] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:19.733 
[2024-07-14 21:08:16.538538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.538562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.733 [2024-07-14 21:08:16.538615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.538629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.733 #13 NEW cov: 12108 ft: 13756 corp: 10/123b lim: 30 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:19.733 [2024-07-14 21:08:16.578342] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006161 00:07:19.733 [2024-07-14 21:08:16.578456] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (99720) > buf size (4096) 00:07:19.733 [2024-07-14 21:08:16.578563] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:19.733 [2024-07-14 21:08:16.578761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee618161 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.578787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.733 [2024-07-14 21:08:16.578840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:61610061 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.578854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.733 [2024-07-14 21:08:16.578904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.578918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.733 #14 NEW cov: 12108 ft: 14061 corp: 11/144b lim: 30 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:19.733 [2024-07-14 21:08:16.618456] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (113204) > buf size (4096) 00:07:19.733 [2024-07-14 21:08:16.618569] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.733 [2024-07-14 21:08:16.618775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.618801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.733 [2024-07-14 21:08:16.618856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.733 [2024-07-14 21:08:16.618870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.993 #15 NEW cov: 12108 ft: 14075 corp: 12/157b lim: 30 exec/s: 0 
rss: 70Mb L: 13/21 MS: 1 ShuffleBytes- 00:07:19.993 [2024-07-14 21:08:16.658534] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (113204) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.658649] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (406068) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.658848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.658874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.658928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c818c cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.658942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.993 #16 NEW cov: 12108 ft: 14133 corp: 13/171b lim: 30 exec/s: 0 rss: 70Mb L: 14/21 MS: 1 InsertByte- 00:07:19.993 [2024-07-14 21:08:16.708775] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.708886] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.708997] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.709104] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (140) > len (4) 00:07:19.993 [2024-07-14 21:08:16.709314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.709341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.709393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.709407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.709462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.709476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.709525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.709540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.993 #17 NEW cov: 12121 ft: 14707 corp: 14/199b lim: 30 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 CopyPart- 00:07:19.993 [2024-07-14 21:08:16.758813] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:19.993 [2024-07-14 21:08:16.758929] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.993 [2024-07-14 
21:08:16.759145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.759171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.759224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff00008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.759239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.993 #23 NEW cov: 12121 ft: 14718 corp: 15/212b lim: 30 exec/s: 0 rss: 70Mb L: 13/28 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:07:19.993 [2024-07-14 21:08:16.798945] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.799060] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143536) > buf size (4096) 00:07:19.993 [2024-07-14 21:08:16.799260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.799286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.799340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c2b008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.799355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.993 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:19.993 #24 NEW cov: 12144 ft: 14756 corp: 16/226b lim: 30 exec/s: 0 rss: 70Mb L: 14/28 MS: 1 InsertByte- 00:07:19.993 [2024-07-14 21:08:16.849071] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:19.993 [2024-07-14 21:08:16.849184] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:19.993 [2024-07-14 21:08:16.849398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.849424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.993 [2024-07-14 21:08:16.849478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff00008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.993 [2024-07-14 21:08:16.849493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.993 #25 NEW cov: 12144 ft: 14814 corp: 17/239b lim: 30 exec/s: 0 rss: 70Mb L: 13/28 MS: 1 ShuffleBytes- 00:07:20.253 [2024-07-14 21:08:16.899220] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (214972) > buf size (4096) 00:07:20.253 [2024-07-14 21:08:16.899336] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8c32 00:07:20.253 [2024-07-14 21:08:16.899571] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d1ee008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:16.899598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.253 [2024-07-14 21:08:16.899654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:16.899669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.253 #26 NEW cov: 12144 ft: 14827 corp: 18/254b lim: 30 exec/s: 26 rss: 70Mb L: 15/28 MS: 1 CrossOver- 00:07:20.253 [2024-07-14 21:08:16.949307] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:20.253 [2024-07-14 21:08:16.949524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:16.949550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.253 #29 NEW cov: 12144 ft: 14834 corp: 19/262b lim: 30 exec/s: 29 rss: 70Mb L: 8/28 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:20.253 [2024-07-14 21:08:16.989453] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:20.253 [2024-07-14 21:08:16.989664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:16.989690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.253 #30 NEW cov: 12144 ft: 14840 corp: 20/273b lim: 30 exec/s: 30 rss: 70Mb L: 11/28 MS: 1 EraseBytes- 00:07:20.253 [2024-07-14 21:08:17.039578] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:20.253 [2024-07-14 21:08:17.039696] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff7e 00:07:20.253 [2024-07-14 21:08:17.039908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:17.039934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.253 [2024-07-14 21:08:17.039991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:17.040005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.253 #31 NEW cov: 12144 ft: 14872 corp: 21/285b lim: 30 exec/s: 31 rss: 70Mb L: 12/28 MS: 1 EraseBytes- 00:07:20.253 [2024-07-14 21:08:17.079719] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:20.253 [2024-07-14 21:08:17.079838] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a0a 00:07:20.253 [2024-07-14 21:08:17.080035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:17.080061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.253 [2024-07-14 21:08:17.080115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:17.080129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.253 #32 NEW cov: 12144 ft: 14960 corp: 22/299b lim: 30 exec/s: 32 rss: 70Mb L: 14/28 MS: 1 CopyPart- 00:07:20.253 [2024-07-14 21:08:17.129808] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (113204) > buf size (4096) 00:07:20.253 [2024-07-14 21:08:17.130013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.253 [2024-07-14 21:08:17.130039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.253 #33 NEW cov: 12144 ft: 14977 corp: 23/310b lim: 30 exec/s: 33 rss: 70Mb L: 11/28 MS: 1 CrossOver- 00:07:20.512 [2024-07-14 21:08:17.169896] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.170112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.170137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.512 #34 NEW cov: 12144 ft: 14988 corp: 24/318b lim: 30 exec/s: 34 rss: 70Mb L: 8/28 MS: 1 ChangeByte- 00:07:20.512 [2024-07-14 21:08:17.210051] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.210165] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:20.512 [2024-07-14 21:08:17.210366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.210391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.210450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.210464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.512 #35 NEW cov: 12144 ft: 15001 corp: 25/332b lim: 30 exec/s: 35 rss: 70Mb L: 14/28 MS: 1 ShuffleBytes- 00:07:20.512 [2024-07-14 21:08:17.250221] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.250334] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.250440] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143892) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.250546] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log 
page: offset (140) > len (4) 00:07:20.512 [2024-07-14 21:08:17.250754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.250780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.250833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.250847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.250896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c84008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.250909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.250959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.250973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.512 #36 NEW cov: 12144 ft: 15014 corp: 26/360b lim: 30 exec/s: 36 rss: 70Mb L: 28/28 MS: 1 ChangeBit- 00:07:20.512 [2024-07-14 21:08:17.300349] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:20.512 [2024-07-14 21:08:17.300470] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:20.512 [2024-07-14 21:08:17.300681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.300707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.300763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff00008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.300777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.512 #37 NEW cov: 12144 ft: 15063 corp: 27/373b lim: 30 exec/s: 37 rss: 70Mb L: 13/28 MS: 1 ChangeBinInt- 00:07:20.512 [2024-07-14 21:08:17.350510] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.350620] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:20.512 [2024-07-14 21:08:17.350827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.350852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.350906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.350921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.512 #38 NEW cov: 12144 ft: 15095 corp: 28/388b lim: 30 exec/s: 38 rss: 70Mb L: 15/28 MS: 1 InsertRepeatedBytes- 00:07:20.512 [2024-07-14 21:08:17.400575] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (112844) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.400684] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (406068) > buf size (4096) 00:07:20.512 [2024-07-14 21:08:17.400880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e32000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.400905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.512 [2024-07-14 21:08:17.400959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c818c cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.512 [2024-07-14 21:08:17.400973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.771 #39 NEW cov: 12144 ft: 15098 corp: 29/405b lim: 30 exec/s: 39 rss: 70Mb L: 17/28 MS: 1 ShuffleBytes- 00:07:20.771 [2024-07-14 21:08:17.450787] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.450893] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.450998] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261684) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.451099] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (140) > len (4) 00:07:20.771 [2024-07-14 21:08:17.451309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.451334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.451388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.451403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.451460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.451474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.451525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.451539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.771 #40 NEW cov: 12144 ft: 15105 corp: 30/433b lim: 30 exec/s: 40 rss: 70Mb L: 28/28 MS: 1 CrossOver- 
00:07:20.771 [2024-07-14 21:08:17.490847] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.491151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.491177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.771 NEW_FUNC[1/2]: 0x11dca30 in nvmf_ctrlr_unmask_aen /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2265 00:07:20.771 NEW_FUNC[2/2]: 0x11dccb0 in nvmf_get_error_log_page /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2319 00:07:20.771 #41 NEW cov: 12154 ft: 15126 corp: 31/448b lim: 30 exec/s: 41 rss: 71Mb L: 15/28 MS: 1 ChangeBinInt- 00:07:20.771 [2024-07-14 21:08:17.541067] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (768564) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.541180] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.541285] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.541385] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (930356) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.541602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c028c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.541628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.541684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.541697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.541752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.541766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.541820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8c8c838c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.541834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.771 #42 NEW cov: 12154 ft: 15143 corp: 32/475b lim: 30 exec/s: 42 rss: 71Mb L: 27/28 MS: 1 CrossOver- 00:07:20.771 [2024-07-14 21:08:17.581114] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (112844) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.581226] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (406068) > buf size (4096) 00:07:20.771 [2024-07-14 21:08:17.581435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6e32000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.771 [2024-07-14 21:08:17.581465] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.771 [2024-07-14 21:08:17.581516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c818c cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.772 [2024-07-14 21:08:17.581531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.772 #43 NEW cov: 12154 ft: 15156 corp: 33/492b lim: 30 exec/s: 43 rss: 71Mb L: 17/28 MS: 1 ChangeByte- 00:07:20.772 [2024-07-14 21:08:17.631296] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:20.772 [2024-07-14 21:08:17.631409] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:20.772 [2024-07-14 21:08:17.631520] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261684) > buf size (4096) 00:07:20.772 [2024-07-14 21:08:17.631623] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (140) > len (4) 00:07:20.772 [2024-07-14 21:08:17.631831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.772 [2024-07-14 21:08:17.631857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.772 [2024-07-14 21:08:17.631911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.772 [2024-07-14 21:08:17.631925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.772 [2024-07-14 21:08:17.631975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.772 [2024-07-14 21:08:17.631992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.772 [2024-07-14 21:08:17.632041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.772 [2024-07-14 21:08:17.632054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.772 #44 NEW cov: 12154 ft: 15179 corp: 34/520b lim: 30 exec/s: 44 rss: 71Mb L: 28/28 MS: 1 ChangeBinInt- 00:07:21.031 [2024-07-14 21:08:17.681393] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.681518] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:21.031 [2024-07-14 21:08:17.681717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.681743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.681797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.681811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.031 #45 NEW cov: 12154 ft: 15182 corp: 35/534b lim: 30 exec/s: 45 rss: 71Mb L: 14/28 MS: 1 ShuffleBytes- 00:07:21.031 [2024-07-14 21:08:17.711535] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.711645] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.031 [2024-07-14 21:08:17.711745] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (200748) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.711844] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:21.031 [2024-07-14 21:08:17.712052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.712077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.712131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.712145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.712197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:c40a000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.712210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.712262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.712275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.031 #46 NEW cov: 12154 ft: 15226 corp: 36/558b lim: 30 exec/s: 46 rss: 71Mb L: 24/28 MS: 1 CopyPart- 00:07:21.031 [2024-07-14 21:08:17.751642] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.751752] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (668212) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.751856] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8c02 00:07:21.031 [2024-07-14 21:08:17.751958] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4100) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.752156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.752186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.752243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8c8c028c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.752257] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.752311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8cff008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.752325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.752379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.752393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.031 #47 NEW cov: 12154 ft: 15259 corp: 37/587b lim: 30 exec/s: 47 rss: 71Mb L: 29/29 MS: 1 CrossOver- 00:07:21.031 [2024-07-14 21:08:17.801741] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:21.031 [2024-07-14 21:08:17.801849] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8cff 00:07:21.031 [2024-07-14 21:08:17.802048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.802073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.031 [2024-07-14 21:08:17.802127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8cac008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.031 [2024-07-14 21:08:17.802141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.031 #48 NEW cov: 12154 ft: 15276 corp: 38/600b lim: 30 exec/s: 48 rss: 71Mb L: 13/29 MS: 1 ChangeBit- 00:07:21.031 [2024-07-14 21:08:17.841913] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006161 00:07:21.032 [2024-07-14 21:08:17.842028] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (99720) > buf size (4096) 00:07:21.032 [2024-07-14 21:08:17.842130] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:21.032 [2024-07-14 21:08:17.842235] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (143924) > buf size (4096) 00:07:21.032 [2024-07-14 21:08:17.842452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee618161 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.032 [2024-07-14 21:08:17.842477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.032 [2024-07-14 21:08:17.842533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:61610061 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.032 [2024-07-14 21:08:17.842548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.032 [2024-07-14 21:08:17.842601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:21.032 [2024-07-14 21:08:17.842615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.032 [2024-07-14 21:08:17.842670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8c8c008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.032 [2024-07-14 21:08:17.842687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.032 #49 NEW cov: 12154 ft: 15285 corp: 39/626b lim: 30 exec/s: 49 rss: 71Mb L: 26/29 MS: 1 CopyPart- 00:07:21.032 [2024-07-14 21:08:17.891955] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (244276) > buf size (4096) 00:07:21.032 [2024-07-14 21:08:17.892159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ee8c00ee cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.032 [2024-07-14 21:08:17.892185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.032 #50 NEW cov: 12154 ft: 15292 corp: 40/635b lim: 30 exec/s: 25 rss: 71Mb L: 9/29 MS: 1 CrossOver- 00:07:21.032 #50 DONE cov: 12154 ft: 15292 corp: 40/635b lim: 30 exec/s: 25 rss: 71Mb 00:07:21.032 ###### Recommended dictionary. ###### 00:07:21.032 "\377~" # Uses: 0 00:07:21.032 "\3212\017HF\202*\000" # Uses: 0 00:07:21.032 "\002\000\000\000" # Uses: 0 00:07:21.032 "\377\377\377\377\377\377\377\000" # Uses: 0 00:07:21.032 ###### End of recommended dictionary. ###### 00:07:21.032 Done 50 runs in 2 second(s) 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # 
echo leak:spdk_nvmf_qpair_disconnect 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.291 21:08:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:21.292 [2024-07-14 21:08:18.072190] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:21.292 [2024-07-14 21:08:18.072268] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017919 ] 00:07:21.292 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.551 [2024-07-14 21:08:18.329175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.551 [2024-07-14 21:08:18.358742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.551 [2024-07-14 21:08:18.411094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.551 [2024-07-14 21:08:18.427419] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:21.551 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.551 INFO: Seed: 1541707754 00:07:21.810 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:21.810 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:21.810 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:21.810 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.810 #2 INITED exec/s: 0 rss: 62Mb 00:07:21.810 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.810 This may also happen if the target rejected all inputs we tried so far 00:07:21.810 [2024-07-14 21:08:18.497608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.810 [2024-07-14 21:08:18.497644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.810 [2024-07-14 21:08:18.497730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.810 [2024-07-14 21:08:18.497746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.070 NEW_FUNC[1/691]: 0x496d60 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:22.070 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.070 #7 NEW cov: 11820 ft: 11806 corp: 2/17b lim: 35 exec/s: 0 rss: 69Mb L: 16/16 MS: 5 CopyPart-InsertByte-CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:07:22.070 [2024-07-14 21:08:18.827698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.827742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.827876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.827899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.070 #8 NEW cov: 11950 ft: 12599 corp: 3/33b lim: 35 exec/s: 0 rss: 69Mb L: 16/16 MS: 1 ShuffleBytes- 00:07:22.070 [2024-07-14 21:08:18.887965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.887992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.888129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.888147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.888265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffa600ff cdw11:f500f59f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.888281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.070 #14 NEW cov: 11956 ft: 13052 corp: 4/60b lim: 35 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 CopyPart- 00:07:22.070 [2024-07-14 21:08:18.927878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.927906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.928046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ff000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.928063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.070 #15 NEW cov: 12041 ft: 13368 corp: 5/76b lim: 35 exec/s: 0 rss: 69Mb L: 16/27 MS: 1 ChangeBinInt- 00:07:22.070 [2024-07-14 21:08:18.968468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa6000a cdw11:9f00a6f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.968494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.968627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.968647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.968771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f5ff009f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.968787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.070 [2024-07-14 21:08:18.968908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.070 [2024-07-14 21:08:18.968926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.329 #16 NEW cov: 12041 ft: 13924 corp: 6/104b lim: 35 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 CrossOver- 00:07:22.329 [2024-07-14 21:08:19.018551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa6000a cdw11:f5000aa6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.329 [2024-07-14 21:08:19.018577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.329 [2024-07-14 21:08:19.018708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00f5 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.329 [2024-07-14 21:08:19.018723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.329 [2024-07-14 21:08:19.018833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.329 [2024-07-14 21:08:19.018850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.329 [2024-07-14 21:08:19.018963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.329 
[2024-07-14 21:08:19.018978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.329 #17 NEW cov: 12041 ft: 14016 corp: 7/132b lim: 35 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 CrossOver- 00:07:22.329 [2024-07-14 21:08:19.068744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.329 [2024-07-14 21:08:19.068771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.329 [2024-07-14 21:08:19.068909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.329 [2024-07-14 21:08:19.068926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.069047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffa600ff cdw11:f500f59f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.069063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.069179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:303000ff cdw11:ff003030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.069194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.330 #18 NEW cov: 12041 ft: 14065 corp: 8/163b lim: 35 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:22.330 [2024-07-14 21:08:19.118879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.118905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.119022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.119038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.119160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.119175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.119308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.119323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.330 #19 NEW cov: 12041 ft: 14151 corp: 9/191b lim: 35 exec/s: 0 rss: 70Mb L: 28/31 MS: 1 InsertRepeatedBytes- 00:07:22.330 [2024-07-14 21:08:19.158991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.159018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.159135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.159150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.159269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.159285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.159402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.159418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.330 #20 NEW cov: 12041 ft: 14184 corp: 10/224b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:22.330 [2024-07-14 21:08:19.208419] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.330 [2024-07-14 21:08:19.208784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:00009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.208812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.330 [2024-07-14 21:08:19.208933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:10000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.330 [2024-07-14 21:08:19.208952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.589 #21 NEW cov: 12050 ft: 14272 corp: 11/240b lim: 35 exec/s: 0 rss: 70Mb L: 16/33 MS: 1 ChangeBinInt- 00:07:22.589 [2024-07-14 21:08:19.259251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.259277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.259398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff9f00ff cdw11:ff00f5ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.259414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.259528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.259547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:22.589 [2024-07-14 21:08:19.259667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:a6f500ff cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.259681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.589 #22 NEW cov: 12050 ft: 14282 corp: 12/273b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CopyPart- 00:07:22.589 [2024-07-14 21:08:19.298943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.298967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.299098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.299114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.589 #23 NEW cov: 12050 ft: 14306 corp: 13/288b lim: 35 exec/s: 0 rss: 70Mb L: 15/33 MS: 1 CrossOver- 00:07:22.589 [2024-07-14 21:08:19.339496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa6000a cdw11:9f00a6f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.339522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.339631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.339646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.339756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f5fb009f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.339770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.339889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.339903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.589 #24 NEW cov: 12050 ft: 14342 corp: 14/316b lim: 35 exec/s: 0 rss: 70Mb L: 28/33 MS: 1 ChangeBit- 00:07:22.589 [2024-07-14 21:08:19.379242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.379267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.379401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.379417] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.589 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:22.589 #25 NEW cov: 12073 ft: 14398 corp: 15/333b lim: 35 exec/s: 0 rss: 70Mb L: 17/33 MS: 1 InsertByte- 00:07:22.589 [2024-07-14 21:08:19.419307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.419332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.589 [2024-07-14 21:08:19.419458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0a00ff cdw11:9f00a6f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.419473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.589 #26 NEW cov: 12073 ft: 14441 corp: 16/349b lim: 35 exec/s: 0 rss: 70Mb L: 16/33 MS: 1 CrossOver- 00:07:22.589 [2024-07-14 21:08:19.459204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:00009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.589 [2024-07-14 21:08:19.459231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 #27 NEW cov: 12073 ft: 14846 corp: 17/360b lim: 35 exec/s: 27 rss: 70Mb L: 11/33 MS: 1 EraseBytes- 00:07:22.849 [2024-07-14 21:08:19.509836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.509864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.510005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.510022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.510150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffa600ff cdw11:f5000260 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.510167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.849 #28 NEW cov: 12073 ft: 14856 corp: 18/387b lim: 35 exec/s: 28 rss: 70Mb L: 27/33 MS: 1 ChangeBinInt- 00:07:22.849 [2024-07-14 21:08:19.550224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.550251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.550389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.550405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.550536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffa600ff cdw11:f5000260 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.550552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.550670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.550687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.849 #29 NEW cov: 12073 ft: 14879 corp: 19/415b lim: 35 exec/s: 29 rss: 70Mb L: 28/33 MS: 1 InsertByte- 00:07:22.849 [2024-07-14 21:08:19.599619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:0a009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.599648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 #30 NEW cov: 12073 ft: 14899 corp: 20/423b lim: 35 exec/s: 30 rss: 70Mb L: 8/33 MS: 1 CrossOver- 00:07:22.849 [2024-07-14 21:08:19.649929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.649955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.650090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.650106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.849 #31 NEW cov: 12073 ft: 14901 corp: 21/440b lim: 35 exec/s: 31 rss: 70Mb L: 17/33 MS: 1 InsertByte- 00:07:22.849 [2024-07-14 21:08:19.689950] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:22.849 [2024-07-14 21:08:19.690292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.690321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.690438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbfc000b cdw11:8200f447 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.690457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.690582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff010000 cdw11:05000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.690602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.849 #32 NEW cov: 12073 ft: 14913 corp: 22/464b lim: 35 exec/s: 32 
rss: 70Mb L: 24/33 MS: 1 CMP- DE: "\013\273\374\364G\202*\000"- 00:07:22.849 [2024-07-14 21:08:19.730624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa6000a cdw11:9f00a6f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.730651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.730770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.730788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.730913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f5ff009f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.730933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.849 [2024-07-14 21:08:19.731050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.849 [2024-07-14 21:08:19.731066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.849 #33 NEW cov: 12073 ft: 14929 corp: 23/492b lim: 35 exec/s: 33 rss: 70Mb L: 28/33 MS: 1 ChangeBit- 00:07:23.109 [2024-07-14 21:08:19.770594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.770622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.770741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:bb00ff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.770758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.770873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:478200f4 cdw11:ff002a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.770887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.109 #34 NEW cov: 12073 ft: 14943 corp: 24/515b lim: 35 exec/s: 34 rss: 70Mb L: 23/33 MS: 1 PersAutoDict- DE: "\013\273\374\364G\202*\000"- 00:07:23.109 [2024-07-14 21:08:19.820317] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.109 [2024-07-14 21:08:19.820626] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.109 [2024-07-14 21:08:19.820974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:00009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.821002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.821116] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:10000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.821133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.821249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f59f00ff cdw11:0000f500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.821266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.821380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.821402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.109 #35 NEW cov: 12073 ft: 14978 corp: 25/545b lim: 35 exec/s: 35 rss: 70Mb L: 30/33 MS: 1 CopyPart- 00:07:23.109 [2024-07-14 21:08:19.860674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.860700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.860837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.860860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.109 #36 NEW cov: 12073 ft: 15000 corp: 26/560b lim: 35 exec/s: 36 rss: 70Mb L: 15/33 MS: 1 EraseBytes- 00:07:23.109 [2024-07-14 21:08:19.901191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa6000a cdw11:f5000aa6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.901218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.901346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00f5 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.901361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.901477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.901495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.901609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.901625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.109 #37 NEW cov: 12073 ft: 15011 
corp: 27/588b lim: 35 exec/s: 37 rss: 70Mb L: 28/33 MS: 1 ShuffleBytes- 00:07:23.109 [2024-07-14 21:08:19.950677] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.109 [2024-07-14 21:08:19.951154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.951184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.951308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:bb00ff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.951329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.109 [2024-07-14 21:08:19.951449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:478200f4 cdw11:ff002a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:19.951466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.109 #38 NEW cov: 12073 ft: 15089 corp: 28/611b lim: 35 exec/s: 38 rss: 70Mb L: 23/33 MS: 1 ChangeBinInt- 00:07:23.109 [2024-07-14 21:08:20.000902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:00009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.109 [2024-07-14 21:08:20.000929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.369 #39 NEW cov: 12073 ft: 15120 corp: 29/622b lim: 35 exec/s: 39 rss: 70Mb L: 11/33 MS: 1 ShuffleBytes- 00:07:23.369 [2024-07-14 21:08:20.051156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.051184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.051321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.051339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.369 #40 NEW cov: 12073 ft: 15147 corp: 30/639b lim: 35 exec/s: 40 rss: 70Mb L: 17/33 MS: 1 CopyPart- 00:07:23.369 [2024-07-14 21:08:20.101760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:66009f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.101790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.101914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.101930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.102045] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00f5 cdw11:fc000bbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.102062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.102177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:822a0047 cdw11:010000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.102197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.369 #41 NEW cov: 12073 ft: 15190 corp: 31/673b lim: 35 exec/s: 41 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:23.369 [2024-07-14 21:08:20.151601] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.369 [2024-07-14 21:08:20.151985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.152012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.152129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff9f00ff cdw11:ff00f5ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.152147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.152268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff0300ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.152284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.152398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:590a0000 cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.152422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.369 #42 NEW cov: 12073 ft: 15234 corp: 32/706b lim: 35 exec/s: 42 rss: 70Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:23.369 [2024-07-14 21:08:20.202106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a27000a cdw11:a600a60a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.202134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.202258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f5ff009f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.202274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.202395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.202412] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.202523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.202539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.369 #43 NEW cov: 12073 ft: 15235 corp: 33/735b lim: 35 exec/s: 43 rss: 71Mb L: 29/34 MS: 1 InsertByte- 00:07:23.369 [2024-07-14 21:08:20.242054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.242082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.242195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.242211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.369 [2024-07-14 21:08:20.242330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffa600ff cdw11:f500f59f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.369 [2024-07-14 21:08:20.242345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.369 #44 NEW cov: 12073 ft: 15242 corp: 34/762b lim: 35 exec/s: 44 rss: 71Mb L: 27/34 MS: 1 ChangeBit- 00:07:23.629 [2024-07-14 21:08:20.281639] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:23.629 [2024-07-14 21:08:20.282147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.282176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.282300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:bb00ff0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.282322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.282438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:82fc00f4 cdw11:ff002a47 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.282458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.629 #45 NEW cov: 12073 ft: 15255 corp: 35/785b lim: 35 exec/s: 45 rss: 71Mb L: 23/34 MS: 1 ShuffleBytes- 00:07:23.629 [2024-07-14 21:08:20.332013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff009ff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.332040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.332168] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00f7ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.332186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.629 #46 NEW cov: 12073 ft: 15296 corp: 36/801b lim: 35 exec/s: 46 rss: 71Mb L: 16/34 MS: 1 ChangeBit- 00:07:23.629 [2024-07-14 21:08:20.372556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa6000a cdw11:9f00a6f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.372583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.372700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff007fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.372720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.372838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f5ff009f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.372854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.372973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.372988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.629 #47 NEW cov: 12073 ft: 15305 corp: 37/829b lim: 35 exec/s: 47 rss: 71Mb L: 28/34 MS: 1 ChangeByte- 00:07:23.629 [2024-07-14 21:08:20.422453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.422481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.422610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.422627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.629 [2024-07-14 21:08:20.422749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.422764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.629 #49 NEW cov: 12073 ft: 15313 corp: 38/854b lim: 35 exec/s: 49 rss: 71Mb L: 25/34 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:23.629 [2024-07-14 21:08:20.462145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a6f5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.629 [2024-07-14 21:08:20.462170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.629 #50 NEW cov: 12073 ft: 15314 corp: 39/864b lim: 35 exec/s: 25 rss: 71Mb L: 10/34 MS: 1 EraseBytes- 00:07:23.629 #50 DONE cov: 12073 ft: 15314 corp: 39/864b lim: 35 exec/s: 25 rss: 71Mb 00:07:23.629 ###### Recommended dictionary. ###### 00:07:23.629 "\013\273\374\364G\202*\000" # Uses: 1 00:07:23.629 ###### End of recommended dictionary. ###### 00:07:23.629 Done 50 runs in 2 second(s) 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.889 21:08:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:23.889 [2024-07-14 21:08:20.644203] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:23.889 [2024-07-14 21:08:20.644290] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018448 ] 00:07:23.889 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.148 [2024-07-14 21:08:20.898142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.148 [2024-07-14 21:08:20.928663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.148 [2024-07-14 21:08:20.980795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.148 [2024-07-14 21:08:20.997112] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:24.148 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.148 INFO: Seed: 4112714712 00:07:24.148 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:24.148 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:24.148 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:24.148 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.148 #2 INITED exec/s: 0 rss: 62Mb 00:07:24.149 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.149 This may also happen if the target rejected all inputs we tried so far 00:07:24.407 [2024-07-14 21:08:21.052533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.407 [2024-07-14 21:08:21.052565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.666 NEW_FUNC[1/697]: 0x498a30 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:24.666 NEW_FUNC[2/697]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.666 #3 NEW cov: 11969 ft: 11948 corp: 2/12b lim: 20 exec/s: 0 rss: 69Mb L: 11/11 MS: 1 InsertRepeatedBytes- 00:07:24.666 [2024-07-14 21:08:21.383943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.666 [2024-07-14 21:08:21.384003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.666 NEW_FUNC[1/3]: 0x134e370 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:07:24.666 NEW_FUNC[2/3]: 0x136f6b0 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3517 00:07:24.666 #4 NEW cov: 12200 ft: 12979 corp: 3/29b lim: 20 exec/s: 0 rss: 69Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:24.666 [2024-07-14 21:08:21.453465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.666 [2024-07-14 21:08:21.453494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.666 #5 NEW cov: 12206 ft: 13307 corp: 4/40b lim: 20 exec/s: 0 rss: 69Mb L: 11/17 MS: 1 ChangeBit- 00:07:24.666 [2024-07-14 21:08:21.493590] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.666 [2024-07-14 21:08:21.493618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.666 #6 NEW cov: 12291 ft: 13596 corp: 5/51b lim: 20 exec/s: 0 rss: 69Mb L: 11/17 MS: 1 CrossOver- 00:07:24.666 #8 NEW cov: 12291 ft: 13791 corp: 6/59b lim: 20 exec/s: 0 rss: 69Mb L: 8/17 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:24.924 [2024-07-14 21:08:21.584056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.924 [2024-07-14 21:08:21.584083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.924 #14 NEW cov: 12291 ft: 13948 corp: 7/75b lim: 20 exec/s: 0 rss: 70Mb L: 16/17 MS: 1 CrossOver- 00:07:24.924 [2024-07-14 21:08:21.624340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.924 [2024-07-14 21:08:21.624369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.924 #15 NEW cov: 12291 ft: 14038 corp: 8/92b lim: 20 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ShuffleBytes- 00:07:24.924 #16 NEW cov: 12294 ft: 14250 corp: 9/103b lim: 20 exec/s: 0 rss: 70Mb L: 11/17 MS: 1 ChangeBit- 00:07:24.924 [2024-07-14 21:08:21.724273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.925 [2024-07-14 21:08:21.724301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.925 #17 NEW cov: 12294 ft: 14281 corp: 10/114b lim: 20 exec/s: 0 rss: 70Mb L: 11/17 MS: 1 ShuffleBytes- 00:07:24.925 [2024-07-14 21:08:21.764418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.925 [2024-07-14 21:08:21.764450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.925 #18 NEW cov: 12294 ft: 14322 corp: 11/125b lim: 20 exec/s: 0 rss: 70Mb L: 11/17 MS: 1 CrossOver- 00:07:24.925 [2024-07-14 21:08:21.804827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:24.925 [2024-07-14 21:08:21.804855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.186 #19 NEW cov: 12294 ft: 14361 corp: 12/142b lim: 20 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeBinInt- 00:07:25.186 #20 NEW cov: 12294 ft: 14405 corp: 13/153b lim: 20 exec/s: 0 rss: 70Mb L: 11/17 MS: 1 ChangeByte- 00:07:25.186 [2024-07-14 21:08:21.904789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.186 [2024-07-14 21:08:21.904817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.186 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:25.186 #21 NEW cov: 12317 ft: 
14427 corp: 14/164b lim: 20 exec/s: 0 rss: 70Mb L: 11/17 MS: 1 CopyPart- 00:07:25.186 [2024-07-14 21:08:21.955015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.186 [2024-07-14 21:08:21.955042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.186 #22 NEW cov: 12321 ft: 14577 corp: 15/176b lim: 20 exec/s: 0 rss: 70Mb L: 12/17 MS: 1 InsertByte- 00:07:25.186 #23 NEW cov: 12321 ft: 14586 corp: 16/187b lim: 20 exec/s: 0 rss: 70Mb L: 11/17 MS: 1 ShuffleBytes- 00:07:25.186 [2024-07-14 21:08:22.045464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.186 [2024-07-14 21:08:22.045490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.186 #24 NEW cov: 12321 ft: 14665 corp: 17/204b lim: 20 exec/s: 24 rss: 70Mb L: 17/17 MS: 1 ChangeByte- 00:07:25.186 [2024-07-14 21:08:22.085578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.186 [2024-07-14 21:08:22.085605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.577 #25 NEW cov: 12321 ft: 14680 corp: 18/221b lim: 20 exec/s: 25 rss: 70Mb L: 17/17 MS: 1 ChangeBinInt- 00:07:25.577 #26 NEW cov: 12321 ft: 14689 corp: 19/229b lim: 20 exec/s: 26 rss: 70Mb L: 8/17 MS: 1 ChangeByte- 00:07:25.577 [2024-07-14 21:08:22.185794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.577 [2024-07-14 21:08:22.185820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.577 #27 NEW cov: 12321 ft: 14706 corp: 20/245b lim: 20 exec/s: 27 rss: 70Mb L: 16/17 MS: 1 ChangeBinInt- 00:07:25.577 [2024-07-14 21:08:22.236101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.577 [2024-07-14 21:08:22.236129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.577 #28 NEW cov: 12321 ft: 14724 corp: 21/263b lim: 20 exec/s: 28 rss: 70Mb L: 18/18 MS: 1 InsertByte- 00:07:25.577 [2024-07-14 21:08:22.276238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.577 [2024-07-14 21:08:22.276265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.577 #29 NEW cov: 12321 ft: 14780 corp: 22/283b lim: 20 exec/s: 29 rss: 70Mb L: 20/20 MS: 1 CopyPart- 00:07:25.577 #30 NEW cov: 12321 ft: 14844 corp: 23/294b lim: 20 exec/s: 30 rss: 70Mb L: 11/20 MS: 1 ChangeBit- 00:07:25.577 [2024-07-14 21:08:22.366107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.577 [2024-07-14 21:08:22.366135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.577 #31 NEW cov: 12321 ft: 14859 corp: 
24/305b lim: 20 exec/s: 31 rss: 70Mb L: 11/20 MS: 1 EraseBytes- 00:07:25.577 [2024-07-14 21:08:22.406200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.577 [2024-07-14 21:08:22.406227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.577 #32 NEW cov: 12321 ft: 14882 corp: 25/314b lim: 20 exec/s: 32 rss: 70Mb L: 9/20 MS: 1 EraseBytes- 00:07:25.577 [2024-07-14 21:08:22.446776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.577 [2024-07-14 21:08:22.446803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.577 #33 NEW cov: 12321 ft: 15013 corp: 26/334b lim: 20 exec/s: 33 rss: 70Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:25.836 [2024-07-14 21:08:22.486637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.836 [2024-07-14 21:08:22.486665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.836 #34 NEW cov: 12321 ft: 15061 corp: 27/346b lim: 20 exec/s: 34 rss: 70Mb L: 12/20 MS: 1 InsertByte- 00:07:25.836 #35 NEW cov: 12321 ft: 15286 corp: 28/353b lim: 20 exec/s: 35 rss: 70Mb L: 7/20 MS: 1 EraseBytes- 00:07:25.836 #36 NEW cov: 12321 ft: 15302 corp: 29/363b lim: 20 exec/s: 36 rss: 70Mb L: 10/20 MS: 1 InsertByte- 00:07:25.836 [2024-07-14 21:08:22.626801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.836 [2024-07-14 21:08:22.626828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.836 #37 NEW cov: 12321 ft: 15344 corp: 30/374b lim: 20 exec/s: 37 rss: 70Mb L: 11/20 MS: 1 ShuffleBytes- 00:07:25.836 [2024-07-14 21:08:22.667180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.836 [2024-07-14 21:08:22.667207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.836 #38 NEW cov: 12321 ft: 15347 corp: 31/391b lim: 20 exec/s: 38 rss: 70Mb L: 17/20 MS: 1 ChangeBinInt- 00:07:26.094 #39 NEW cov: 12321 ft: 15363 corp: 32/399b lim: 20 exec/s: 39 rss: 70Mb L: 8/20 MS: 1 ChangeBinInt- 00:07:26.094 [2024-07-14 21:08:22.757371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.094 [2024-07-14 21:08:22.757399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.094 #40 NEW cov: 12321 ft: 15366 corp: 33/418b lim: 20 exec/s: 40 rss: 71Mb L: 19/20 MS: 1 CopyPart- 00:07:26.094 [2024-07-14 21:08:22.807579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.094 [2024-07-14 21:08:22.807606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.094 #41 NEW cov: 12321 ft: 15416 corp: 
34/437b lim: 20 exec/s: 41 rss: 71Mb L: 19/20 MS: 1 ChangeBit- 00:07:26.094 #42 NEW cov: 12321 ft: 15437 corp: 35/445b lim: 20 exec/s: 42 rss: 71Mb L: 8/20 MS: 1 ChangeBinInt- 00:07:26.094 #43 NEW cov: 12321 ft: 15451 corp: 36/460b lim: 20 exec/s: 43 rss: 71Mb L: 15/20 MS: 1 CopyPart- 00:07:26.094 [2024-07-14 21:08:22.947812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.094 [2024-07-14 21:08:22.947840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.094 #44 NEW cov: 12321 ft: 15482 corp: 37/472b lim: 20 exec/s: 44 rss: 71Mb L: 12/20 MS: 1 CopyPart- 00:07:26.353 #45 NEW cov: 12321 ft: 15506 corp: 38/479b lim: 20 exec/s: 45 rss: 71Mb L: 7/20 MS: 1 EraseBytes- 00:07:26.353 [2024-07-14 21:08:23.027974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.353 [2024-07-14 21:08:23.028004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.353 #46 NEW cov: 12321 ft: 15514 corp: 39/494b lim: 20 exec/s: 23 rss: 71Mb L: 15/20 MS: 1 EraseBytes- 00:07:26.353 #46 DONE cov: 12321 ft: 15514 corp: 39/494b lim: 20 exec/s: 23 rss: 71Mb 00:07:26.353 Done 46 runs in 2 second(s) 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.353 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.354 21:08:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:26.354 [2024-07-14 21:08:23.211438] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:26.354 [2024-07-14 21:08:23.211518] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018895 ] 00:07:26.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.612 [2024-07-14 21:08:23.382965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.612 [2024-07-14 21:08:23.403918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.612 [2024-07-14 21:08:23.456150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.612 [2024-07-14 21:08:23.472482] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:26.612 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.612 INFO: Seed: 2293752600 00:07:26.612 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:26.612 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:26.612 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:26.612 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.612 #2 INITED exec/s: 0 rss: 65Mb 00:07:26.612 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:26.612 This may also happen if the target rejected all inputs we tried so far 00:07:26.871 [2024-07-14 21:08:23.527802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f00 cdw11:48fe0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.871 [2024-07-14 21:08:23.527831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.150 NEW_FUNC[1/692]: 0x499b20 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:27.150 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.150 #17 NEW cov: 11837 ft: 11842 corp: 2/8b lim: 35 exec/s: 0 rss: 71Mb L: 7/7 MS: 5 InsertByte-ChangeBit-ChangeByte-InsertByte-CMP- DE: "\177\000\000\000"- 00:07:27.150 [2024-07-14 21:08:23.838510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f000000 cdw11:48fe0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.150 [2024-07-14 21:08:23.838552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.150 #18 NEW cov: 11971 ft: 12448 corp: 3/15b lim: 35 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 ShuffleBytes- 00:07:27.150 [2024-07-14 21:08:23.888539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ab7f00 cdw11:48fe0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.150 [2024-07-14 21:08:23.888568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.150 #19 NEW cov: 11977 ft: 12755 corp: 4/22b lim: 35 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 ChangeByte- 00:07:27.151 [2024-07-14 21:08:23.928636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7ffe0005 cdw11:b67f0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.151 [2024-07-14 21:08:23.928663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.151 #32 NEW cov: 12062 ft: 12969 corp: 5/29b lim: 35 exec/s: 0 rss: 71Mb L: 7/7 MS: 3 EraseBytes-ChangeBinInt-CopyPart- 00:07:27.151 [2024-07-14 21:08:23.978799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f000000 cdw11:48fe0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.151 [2024-07-14 21:08:23.978825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.151 #33 NEW cov: 12062 ft: 13056 corp: 6/36b lim: 35 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 ChangeByte- 00:07:27.151 [2024-07-14 21:08:24.019048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f000000 cdw11:48000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.151 [2024-07-14 21:08:24.019073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.151 [2024-07-14 21:08:24.019126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7747824a cdw11:0a740003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.151 [2024-07-14 21:08:24.019139] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.151 #34 NEW cov: 12062 ft: 13824 corp: 7/51b lim: 35 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 CMP- DE: "\000*\202JwG\012t"- 00:07:27.409 [2024-07-14 21:08:24.059027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ab7f00 cdw11:48270001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.409 [2024-07-14 21:08:24.059054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.409 #40 NEW cov: 12062 ft: 13941 corp: 8/58b lim: 35 exec/s: 0 rss: 72Mb L: 7/15 MS: 1 ChangeByte- 00:07:27.409 [2024-07-14 21:08:24.109157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ab7f00 cdw11:48fe0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.409 [2024-07-14 21:08:24.109183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.409 #41 NEW cov: 12062 ft: 14025 corp: 9/65b lim: 35 exec/s: 0 rss: 72Mb L: 7/15 MS: 1 CopyPart- 00:07:27.409 [2024-07-14 21:08:24.149288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ab007f00 cdw11:48270001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.409 [2024-07-14 21:08:24.149316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.410 #42 NEW cov: 12062 ft: 14044 corp: 10/72b lim: 35 exec/s: 0 rss: 72Mb L: 7/15 MS: 1 ShuffleBytes- 00:07:27.410 [2024-07-14 21:08:24.199566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f000000 cdw11:482a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.410 [2024-07-14 21:08:24.199592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.410 [2024-07-14 21:08:24.199645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:4a770082 cdw11:0a740003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.410 [2024-07-14 21:08:24.199658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.410 #43 NEW cov: 12062 ft: 14086 corp: 11/87b lim: 35 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ShuffleBytes- 00:07:27.410 [2024-07-14 21:08:24.249557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f00 cdw11:48fe0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.410 [2024-07-14 21:08:24.249582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.410 #44 NEW cov: 12062 ft: 14116 corp: 12/94b lim: 35 exec/s: 0 rss: 72Mb L: 7/15 MS: 1 PersAutoDict- DE: "\177\000\000\000"- 00:07:27.410 [2024-07-14 21:08:24.299695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7ffe0004 cdw11:b67f0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.410 [2024-07-14 21:08:24.299721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.668 #45 NEW cov: 12062 ft: 14164 corp: 13/101b lim: 35 exec/s: 0 rss: 72Mb L: 7/15 MS: 1 ChangeBit- 00:07:27.668 [2024-07-14 21:08:24.349836] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007f7f00 cdw11:00ab0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.668 [2024-07-14 21:08:24.349861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.668 #46 NEW cov: 12062 ft: 14183 corp: 14/110b lim: 35 exec/s: 0 rss: 72Mb L: 9/15 MS: 1 CopyPart- 00:07:27.669 [2024-07-14 21:08:24.399985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d2000000 cdw11:48fe0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.400011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.669 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:27.669 #48 NEW cov: 12085 ft: 14245 corp: 15/117b lim: 35 exec/s: 0 rss: 73Mb L: 7/15 MS: 2 EraseBytes-InsertByte- 00:07:27.669 [2024-07-14 21:08:24.440097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00d20a00 cdw11:00480003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.440124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.669 #49 NEW cov: 12085 ft: 14280 corp: 16/125b lim: 35 exec/s: 0 rss: 73Mb L: 8/15 MS: 1 CrossOver- 00:07:27.669 [2024-07-14 21:08:24.480598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f000000 cdw11:48fe0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.480625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.669 [2024-07-14 21:08:24.480682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.480696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.669 [2024-07-14 21:08:24.480750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.480763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.669 #50 NEW cov: 12085 ft: 14543 corp: 17/147b lim: 35 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:07:27.669 [2024-07-14 21:08:24.520360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00f20a00 cdw11:00480003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.520385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.669 #51 NEW cov: 12085 ft: 14579 corp: 18/155b lim: 35 exec/s: 51 rss: 73Mb L: 8/22 MS: 1 ChangeBit- 00:07:27.669 [2024-07-14 21:08:24.570495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00820000 cdw11:4a770000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.669 [2024-07-14 21:08:24.570525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.928 #52 NEW cov: 12085 ft: 14591 corp: 19/165b lim: 35 exec/s: 52 rss: 73Mb L: 10/22 MS: 1 EraseBytes- 00:07:27.928 [2024-07-14 21:08:24.620652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007d7f00 cdw11:00ab0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.620677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.928 #53 NEW cov: 12085 ft: 14600 corp: 20/174b lim: 35 exec/s: 53 rss: 73Mb L: 9/22 MS: 1 ChangeBit- 00:07:27.928 [2024-07-14 21:08:24.670761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ab7f00 cdw11:487f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.670786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.928 #54 NEW cov: 12085 ft: 14614 corp: 21/185b lim: 35 exec/s: 54 rss: 73Mb L: 11/22 MS: 1 PersAutoDict- DE: "\177\000\000\000"- 00:07:27.928 [2024-07-14 21:08:24.710846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00ab7f00 cdw11:48270003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.710872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.928 #55 NEW cov: 12085 ft: 14655 corp: 22/192b lim: 35 exec/s: 55 rss: 73Mb L: 7/22 MS: 1 ChangeBit- 00:07:27.928 [2024-07-14 21:08:24.751284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.751310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.928 [2024-07-14 21:08:24.751366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.751380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.928 [2024-07-14 21:08:24.751435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.751454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.928 #56 NEW cov: 12085 ft: 14665 corp: 23/218b lim: 35 exec/s: 56 rss: 73Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:27.928 [2024-07-14 21:08:24.801130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f3f cdw11:ab480000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.928 [2024-07-14 21:08:24.801156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.928 #57 NEW cov: 12085 ft: 14682 corp: 24/226b lim: 35 exec/s: 57 rss: 73Mb L: 8/26 MS: 1 InsertByte- 00:07:28.188 [2024-07-14 21:08:24.841407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:482700ab cdw11:b6000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:28.188 [2024-07-14 21:08:24.841432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 [2024-07-14 21:08:24.841488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7747824a cdw11:0a740003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:24.841502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.188 #58 NEW cov: 12085 ft: 14696 corp: 25/241b lim: 35 exec/s: 58 rss: 73Mb L: 15/26 MS: 1 CrossOver- 00:07:28.188 [2024-07-14 21:08:24.881367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f00 cdw11:48fe0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:24.881393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 #59 NEW cov: 12085 ft: 14718 corp: 26/248b lim: 35 exec/s: 59 rss: 73Mb L: 7/26 MS: 1 ShuffleBytes- 00:07:28.188 [2024-07-14 21:08:24.931501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:44570a0a cdw11:83ff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:24.931527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 #61 NEW cov: 12085 ft: 14730 corp: 27/258b lim: 35 exec/s: 61 rss: 73Mb L: 10/26 MS: 2 CopyPart-CMP- DE: "DW\203\377J\202*\000"- 00:07:28.188 [2024-07-14 21:08:24.971551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b6ab7ffe cdw11:48270001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:24.971576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 #62 NEW cov: 12085 ft: 14740 corp: 28/265b lim: 35 exec/s: 62 rss: 73Mb L: 7/26 MS: 1 CrossOver- 00:07:28.188 [2024-07-14 21:08:25.001822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:ab480000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:25.001847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 [2024-07-14 21:08:25.001901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:004800f2 cdw11:fe270001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:25.001914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.188 #68 NEW cov: 12085 ft: 14779 corp: 29/280b lim: 35 exec/s: 68 rss: 73Mb L: 15/26 MS: 1 CrossOver- 00:07:28.188 [2024-07-14 21:08:25.041796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f00 cdw11:48fe0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:25.041821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 #69 NEW cov: 12085 ft: 14841 corp: 30/287b lim: 35 exec/s: 69 rss: 73Mb L: 7/26 MS: 1 ShuffleBytes- 00:07:28.188 [2024-07-14 21:08:25.082092] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007d7f00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:25.082117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.188 [2024-07-14 21:08:25.082169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.188 [2024-07-14 21:08:25.082182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.447 #70 NEW cov: 12085 ft: 14846 corp: 31/306b lim: 35 exec/s: 70 rss: 73Mb L: 19/26 MS: 1 CrossOver- 00:07:28.447 [2024-07-14 21:08:25.122040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7f000000 cdw11:48fe0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.122065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.447 #71 NEW cov: 12085 ft: 14866 corp: 32/313b lim: 35 exec/s: 71 rss: 73Mb L: 7/26 MS: 1 ChangeByte- 00:07:28.447 [2024-07-14 21:08:25.162099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00f20a00 cdw11:003a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.162125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.447 #72 NEW cov: 12085 ft: 14891 corp: 33/321b lim: 35 exec/s: 72 rss: 74Mb L: 8/26 MS: 1 ChangeByte- 00:07:28.447 [2024-07-14 21:08:25.212421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007f00 cdw11:ab480000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.212452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.447 [2024-07-14 21:08:25.212507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:004800f2 cdw11:fe270001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.212520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.447 #73 NEW cov: 12085 ft: 14902 corp: 34/336b lim: 35 exec/s: 73 rss: 74Mb L: 15/26 MS: 1 ChangeByte- 00:07:28.447 [2024-07-14 21:08:25.262415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00f20a00 cdw11:00ba0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.262446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.447 #74 NEW cov: 12085 ft: 14907 corp: 35/344b lim: 35 exec/s: 74 rss: 74Mb L: 8/26 MS: 1 ChangeBit- 00:07:28.447 [2024-07-14 21:08:25.312724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007d7f00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.312749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.447 [2024-07-14 21:08:25.312805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.447 [2024-07-14 21:08:25.312818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.447 #75 NEW cov: 12085 ft: 14922 corp: 36/363b lim: 35 exec/s: 75 rss: 74Mb L: 19/26 MS: 1 ChangeBinInt- 00:07:28.706 [2024-07-14 21:08:25.362999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.706 [2024-07-14 21:08:25.363026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.706 [2024-07-14 21:08:25.363081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7f00ffff cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.706 [2024-07-14 21:08:25.363095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.706 [2024-07-14 21:08:25.363148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00f24800 cdw11:00480003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.706 [2024-07-14 21:08:25.363162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.707 #76 NEW cov: 12085 ft: 14929 corp: 37/387b lim: 35 exec/s: 76 rss: 74Mb L: 24/26 MS: 1 InsertRepeatedBytes- 00:07:28.707 [2024-07-14 21:08:25.412852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffd10afd cdw11:00480003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.707 [2024-07-14 21:08:25.412878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.707 #77 NEW cov: 12085 ft: 14934 corp: 38/395b lim: 35 exec/s: 77 rss: 74Mb L: 8/26 MS: 1 ChangeBinInt- 00:07:28.707 [2024-07-14 21:08:25.452953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ab487fb6 cdw11:27b60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.707 [2024-07-14 21:08:25.452978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.707 #78 NEW cov: 12085 ft: 14938 corp: 39/407b lim: 35 exec/s: 78 rss: 74Mb L: 12/26 MS: 1 CopyPart- 00:07:28.707 [2024-07-14 21:08:25.503245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.707 [2024-07-14 21:08:25.503270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.707 [2024-07-14 21:08:25.503327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:4a7703b3 cdw11:0a740003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.707 [2024-07-14 21:08:25.503341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.707 #79 NEW cov: 12085 ft: 14944 corp: 40/422b lim: 35 exec/s: 39 rss: 74Mb L: 15/26 MS: 1 CMP- DE: "\000\000\000\000\000\000\003\263"- 00:07:28.707 #79 DONE cov: 12085 ft: 14944 corp: 40/422b lim: 35 exec/s: 39 rss: 
74Mb 00:07:28.707 ###### Recommended dictionary. ###### 00:07:28.707 "\177\000\000\000" # Uses: 2 00:07:28.707 "\000*\202JwG\012t" # Uses: 0 00:07:28.707 "DW\203\377J\202*\000" # Uses: 0 00:07:28.707 "\000\000\000\000\000\000\003\263" # Uses: 0 00:07:28.707 ###### End of recommended dictionary. ###### 00:07:28.707 Done 79 runs in 2 second(s) 00:07:28.966 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:28.967 21:08:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:28.967 [2024-07-14 21:08:25.683440] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:28.967 [2024-07-14 21:08:25.683519] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019277 ] 00:07:28.967 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.226 [2024-07-14 21:08:25.941360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.226 [2024-07-14 21:08:25.972305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.226 [2024-07-14 21:08:26.024711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.226 [2024-07-14 21:08:26.041004] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:29.226 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.226 INFO: Seed: 565773672 00:07:29.226 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:29.226 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:29.226 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:29.226 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.226 #2 INITED exec/s: 0 rss: 62Mb 00:07:29.226 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:29.226 This may also happen if the target rejected all inputs we tried so far 00:07:29.226 [2024-07-14 21:08:26.111843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.226 [2024-07-14 21:08:26.111881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.226 [2024-07-14 21:08:26.111951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.226 [2024-07-14 21:08:26.111969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.743 NEW_FUNC[1/692]: 0x49bcb0 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:29.743 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.743 #9 NEW cov: 11846 ft: 11843 corp: 2/21b lim: 45 exec/s: 0 rss: 69Mb L: 20/20 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:29.743 [2024-07-14 21:08:26.452419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.452462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.743 [2024-07-14 21:08:26.452603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.452620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.743 [2024-07-14 21:08:26.452734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.452750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.743 [2024-07-14 21:08:26.452869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.452887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.743 #14 NEW cov: 11982 ft: 12899 corp: 3/59b lim: 45 exec/s: 0 rss: 69Mb L: 38/38 MS: 5 CrossOver-ShuffleBytes-EraseBytes-CrossOver-InsertRepeatedBytes- 00:07:29.743 [2024-07-14 21:08:26.491717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.491745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.743 [2024-07-14 21:08:26.491864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:27cdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.491882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.743 #15 NEW cov: 11988 ft: 13158 corp: 4/80b lim: 45 exec/s: 0 rss: 69Mb L: 21/38 MS: 1 InsertByte- 00:07:29.743 [2024-07-14 21:08:26.541664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.541692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.743 [2024-07-14 21:08:26.541806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.541822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.743 #16 NEW cov: 12073 ft: 13488 corp: 5/105b lim: 45 exec/s: 0 rss: 69Mb L: 25/38 MS: 1 CrossOver- 00:07:29.743 [2024-07-14 21:08:26.581728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.743 [2024-07-14 21:08:26.581755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.744 [2024-07-14 21:08:26.581873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.744 [2024-07-14 21:08:26.581891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.744 #17 NEW cov: 12073 ft: 13621 corp: 6/125b lim: 45 exec/s: 0 rss: 69Mb L: 20/38 MS: 1 ShuffleBytes- 00:07:29.744 [2024-07-14 21:08:26.632257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:29.744 [2024-07-14 21:08:26.632283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.744 [2024-07-14 21:08:26.632415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.744 [2024-07-14 21:08:26.632431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.003 #18 NEW cov: 12073 ft: 13751 corp: 7/145b lim: 45 exec/s: 0 rss: 69Mb L: 20/38 MS: 1 ShuffleBytes- 00:07:30.003 [2024-07-14 21:08:26.682436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.682466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.003 [2024-07-14 21:08:26.682593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:27cdcdcd cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.682610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.003 #19 NEW cov: 12073 ft: 13801 corp: 8/166b lim: 45 exec/s: 0 rss: 69Mb L: 21/38 MS: 1 CMP- DE: "\377\377\377U"- 00:07:30.003 [2024-07-14 21:08:26.742642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.742669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.003 [2024-07-14 21:08:26.742794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.742811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.003 #20 NEW cov: 12073 ft: 13813 corp: 9/184b lim: 45 exec/s: 0 rss: 70Mb L: 18/38 MS: 1 CrossOver- 00:07:30.003 [2024-07-14 21:08:26.792365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2dcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.792392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.003 [2024-07-14 21:08:26.792519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.792537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.003 #21 NEW cov: 12073 ft: 13839 corp: 10/210b lim: 45 exec/s: 0 rss: 70Mb L: 26/38 MS: 1 InsertByte- 00:07:30.003 [2024-07-14 21:08:26.852917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.852942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.003 [2024-07-14 21:08:26.853062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.853077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.003 #22 NEW cov: 12073 ft: 13868 corp: 11/228b lim: 45 exec/s: 0 rss: 70Mb L: 18/38 MS: 1 ShuffleBytes- 00:07:30.003 [2024-07-14 21:08:26.903135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cccd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.903162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.003 [2024-07-14 21:08:26.903289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.003 [2024-07-14 21:08:26.903305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.263 #23 NEW cov: 12073 ft: 13902 corp: 12/248b lim: 45 exec/s: 0 rss: 70Mb L: 20/38 MS: 1 ChangeBit- 00:07:30.263 [2024-07-14 21:08:26.943127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdff81cd cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:26.943154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.263 [2024-07-14 21:08:26.943272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:26.943288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.263 #24 NEW cov: 12073 ft: 13924 corp: 13/268b lim: 45 exec/s: 0 rss: 70Mb L: 20/38 MS: 1 PersAutoDict- DE: "\377\377\377U"- 00:07:30.263 [2024-07-14 21:08:26.983186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:26.983212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.263 [2024-07-14 21:08:26.983330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:27cdcdcd cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:26.983347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.263 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:30.263 #25 NEW cov: 12096 ft: 13959 corp: 14/293b lim: 45 exec/s: 0 rss: 70Mb L: 25/38 MS: 1 PersAutoDict- DE: "\377\377\377U"- 00:07:30.263 [2024-07-14 21:08:27.043472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:27.043498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.263 [2024-07-14 21:08:27.043608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:27cdcdcd cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:27.043625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.263 #26 NEW cov: 12096 ft: 14015 corp: 15/317b lim: 45 exec/s: 0 rss: 70Mb L: 24/38 MS: 1 CopyPart- 00:07:30.263 [2024-07-14 21:08:27.094252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.263 [2024-07-14 21:08:27.094281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.264 [2024-07-14 21:08:27.094422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.264 [2024-07-14 21:08:27.094438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.264 [2024-07-14 21:08:27.094567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.264 [2024-07-14 21:08:27.094583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.264 [2024-07-14 21:08:27.094709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.264 [2024-07-14 21:08:27.094724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.264 #27 NEW cov: 12096 ft: 14034 corp: 16/353b lim: 45 exec/s: 27 rss: 70Mb L: 36/38 MS: 1 InsertRepeatedBytes- 00:07:30.264 [2024-07-14 21:08:27.143826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.264 [2024-07-14 21:08:27.143852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.264 [2024-07-14 21:08:27.143977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:27cdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.264 [2024-07-14 21:08:27.143993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.264 #28 NEW cov: 12096 ft: 14109 corp: 17/374b lim: 45 exec/s: 28 rss: 70Mb L: 21/38 MS: 1 ChangeByte- 00:07:30.523 [2024-07-14 21:08:27.193707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.523 [2024-07-14 21:08:27.193733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.523 #29 NEW cov: 12096 ft: 14868 corp: 18/390b lim: 45 exec/s: 29 rss: 70Mb L: 16/38 MS: 1 EraseBytes- 00:07:30.523 [2024-07-14 21:08:27.244002] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.523 [2024-07-14 21:08:27.244029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.523 [2024-07-14 21:08:27.244161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.523 [2024-07-14 21:08:27.244180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.523 #30 NEW cov: 12096 ft: 14933 corp: 19/408b lim: 45 exec/s: 30 rss: 70Mb L: 18/38 MS: 1 EraseBytes- 00:07:30.523 [2024-07-14 21:08:27.294296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.523 [2024-07-14 21:08:27.294321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.523 [2024-07-14 21:08:27.294431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:27cd0acd cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.523 [2024-07-14 21:08:27.294459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.524 #31 NEW cov: 12096 ft: 14961 corp: 20/429b lim: 45 exec/s: 31 rss: 70Mb L: 21/38 MS: 1 CrossOver- 00:07:30.524 [2024-07-14 21:08:27.335108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff2d0aff cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.335133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.335249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffcdff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.335264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.335380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.335395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.335504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.335518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.335629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.335643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.524 #37 NEW cov: 12096 ft: 15029 corp: 21/474b 
lim: 45 exec/s: 37 rss: 70Mb L: 45/45 MS: 1 CrossOver- 00:07:30.524 [2024-07-14 21:08:27.384497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.384524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.384636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.384653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.524 #38 NEW cov: 12096 ft: 15107 corp: 22/500b lim: 45 exec/s: 38 rss: 70Mb L: 26/45 MS: 1 CrossOver- 00:07:30.524 [2024-07-14 21:08:27.424745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.424772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.424884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.424902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.425015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.425032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.524 [2024-07-14 21:08:27.425177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.524 [2024-07-14 21:08:27.425192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.783 #39 NEW cov: 12096 ft: 15132 corp: 23/543b lim: 45 exec/s: 39 rss: 70Mb L: 43/45 MS: 1 InsertRepeatedBytes- 00:07:30.783 [2024-07-14 21:08:27.475356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.475384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.475501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.475517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.475638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.475655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.475768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.475784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.783 #40 NEW cov: 12096 ft: 15152 corp: 24/582b lim: 45 exec/s: 40 rss: 70Mb L: 39/45 MS: 1 CopyPart- 00:07:30.783 [2024-07-14 21:08:27.524657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a030a03 cdw11:00000006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.524684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.783 #43 NEW cov: 12096 ft: 15161 corp: 25/592b lim: 45 exec/s: 43 rss: 70Mb L: 10/45 MS: 3 CMP-CrossOver-CopyPart- DE: "\003\000\000\000"- 00:07:30.783 [2024-07-14 21:08:27.564593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.564621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.564741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.564759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.783 #44 NEW cov: 12096 ft: 15214 corp: 26/612b lim: 45 exec/s: 44 rss: 70Mb L: 20/45 MS: 1 ChangeBinInt- 00:07:30.783 [2024-07-14 21:08:27.615753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:656581cd cdw11:65650003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.615779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.615917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65650006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.615933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.616053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.616068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.616185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5555ffff cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.616200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.783 #45 NEW cov: 12096 ft: 15225 corp: 27/649b lim: 45 exec/s: 45 rss: 70Mb L: 37/45 MS: 1 InsertRepeatedBytes- 00:07:30.783 [2024-07-14 
21:08:27.675362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.675389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.783 [2024-07-14 21:08:27.675506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.783 [2024-07-14 21:08:27.675523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.043 #46 NEW cov: 12096 ft: 15263 corp: 28/669b lim: 45 exec/s: 46 rss: 70Mb L: 20/45 MS: 1 ChangeBinInt- 00:07:31.043 [2024-07-14 21:08:27.715109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.715138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.715257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cd27cdcd cdw11:cdff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.715273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.043 #47 NEW cov: 12096 ft: 15284 corp: 29/694b lim: 45 exec/s: 47 rss: 70Mb L: 25/45 MS: 1 InsertByte- 00:07:31.043 [2024-07-14 21:08:27.765981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcdcccd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.766008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.766125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcd81 cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.766142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.766261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:cd27cdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.766278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.043 #48 NEW cov: 12096 ft: 15510 corp: 30/725b lim: 45 exec/s: 48 rss: 70Mb L: 31/45 MS: 1 CrossOver- 00:07:31.043 [2024-07-14 21:08:27.825585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcde6cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.825616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.043 #49 NEW cov: 12096 ft: 15536 corp: 31/741b lim: 45 exec/s: 49 rss: 70Mb L: 16/45 MS: 1 ChangeByte- 00:07:31.043 [2024-07-14 21:08:27.875862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcdcccd cdw11:cdcd0006 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.875890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.876021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcd81 cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.876037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.876158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:cd27cdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.876175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.043 #50 NEW cov: 12096 ft: 15555 corp: 32/773b lim: 45 exec/s: 50 rss: 70Mb L: 32/45 MS: 1 InsertByte- 00:07:31.043 [2024-07-14 21:08:27.926304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.926330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.926455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.926472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.926593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.926608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.043 [2024-07-14 21:08:27.926726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.043 [2024-07-14 21:08:27.926742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.303 #51 NEW cov: 12096 ft: 15559 corp: 33/816b lim: 45 exec/s: 51 rss: 71Mb L: 43/45 MS: 1 ShuffleBytes- 00:07:31.303 [2024-07-14 21:08:27.986392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41cd81cd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:27.986418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.303 [2024-07-14 21:08:27.986563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcd2dcd cdw11:27cd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:27.986582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.303 #52 NEW cov: 12096 ft: 15570 corp: 34/842b lim: 45 exec/s: 52 rss: 71Mb L: 26/45 MS: 1 InsertByte- 00:07:31.303 [2024-07-14 21:08:28.046515] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cdcd81cd cdw11:cccd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:28.046543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.303 [2024-07-14 21:08:28.046677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcd0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:28.046697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.303 #53 NEW cov: 12096 ft: 15578 corp: 35/862b lim: 45 exec/s: 53 rss: 71Mb L: 20/45 MS: 1 CrossOver- 00:07:31.303 [2024-07-14 21:08:28.107391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:28.107417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.303 [2024-07-14 21:08:28.107529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:28.107545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.303 [2024-07-14 21:08:28.107663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:28.107679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.303 [2024-07-14 21:08:28.107798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.303 [2024-07-14 21:08:28.107814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.303 #54 NEW cov: 12096 ft: 15581 corp: 36/906b lim: 45 exec/s: 27 rss: 71Mb L: 44/45 MS: 1 InsertByte- 00:07:31.303 #54 DONE cov: 12096 ft: 15581 corp: 36/906b lim: 45 exec/s: 27 rss: 71Mb 00:07:31.303 ###### Recommended dictionary. ###### 00:07:31.303 "\377\377\377U" # Uses: 2 00:07:31.303 "\003\000\000\000" # Uses: 0 00:07:31.303 ###### End of recommended dictionary. 
###### 00:07:31.303 Done 54 runs in 2 second(s) 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.563 21:08:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:31.563 [2024-07-14 21:08:28.287362] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:31.563 [2024-07-14 21:08:28.287438] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019812 ] 00:07:31.563 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.824 [2024-07-14 21:08:28.544802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.824 [2024-07-14 21:08:28.575407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.824 [2024-07-14 21:08:28.627560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.824 [2024-07-14 21:08:28.643898] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:31.824 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:31.824 INFO: Seed: 3169784252 00:07:31.824 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:31.824 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:31.824 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:31.824 INFO: A corpus is not provided, starting from an empty corpus 00:07:31.824 #2 INITED exec/s: 0 rss: 62Mb 00:07:31.824 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:31.824 This may also happen if the target rejected all inputs we tried so far 00:07:31.824 [2024-07-14 21:08:28.689031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000320a cdw11:00000000 00:07:31.824 [2024-07-14 21:08:28.689060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.087 NEW_FUNC[1/690]: 0x49e4c0 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:32.087 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.087 #4 NEW cov: 11769 ft: 11769 corp: 2/3b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 2 ShuffleBytes-InsertByte- 00:07:32.345 [2024-07-14 21:08:28.999842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.345 [2024-07-14 21:08:28.999885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.345 #5 NEW cov: 11899 ft: 12410 corp: 3/5b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CopyPart- 00:07:32.345 [2024-07-14 21:08:29.039817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008ad4 cdw11:00000000 00:07:32.345 [2024-07-14 21:08:29.039844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.345 #7 NEW cov: 11905 ft: 12582 corp: 4/7b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 2 ChangeBit-InsertByte- 00:07:32.345 [2024-07-14 21:08:29.070136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.345 [2024-07-14 21:08:29.070163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.346 [2024-07-14 21:08:29.070213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.070227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.346 [2024-07-14 21:08:29.070278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.070292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.346 #8 NEW cov: 11990 ft: 13098 corp: 5/13b lim: 10 exec/s: 0 rss: 69Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:32.346 [2024-07-14 21:08:29.110056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 
cid:4 nsid:0 cdw10:00000e1c cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.110081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.346 #10 NEW cov: 11990 ft: 13239 corp: 6/15b lim: 10 exec/s: 0 rss: 69Mb L: 2/6 MS: 2 ChangeBit-InsertByte- 00:07:32.346 [2024-07-14 21:08:29.140240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.140266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.346 [2024-07-14 21:08:29.140316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.140330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.346 #11 NEW cov: 11990 ft: 13473 corp: 7/19b lim: 10 exec/s: 0 rss: 69Mb L: 4/6 MS: 1 CopyPart- 00:07:32.346 [2024-07-14 21:08:29.190288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.190314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.346 #12 NEW cov: 11990 ft: 13569 corp: 8/21b lim: 10 exec/s: 0 rss: 69Mb L: 2/6 MS: 1 ChangeBit- 00:07:32.346 [2024-07-14 21:08:29.230383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:32.346 [2024-07-14 21:08:29.230408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.604 #15 NEW cov: 11990 ft: 13595 corp: 9/23b lim: 10 exec/s: 0 rss: 69Mb L: 2/6 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:07:32.604 [2024-07-14 21:08:29.280703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.280730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.604 [2024-07-14 21:08:29.280782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.280796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.604 #21 NEW cov: 11990 ft: 13638 corp: 10/27b lim: 10 exec/s: 0 rss: 70Mb L: 4/6 MS: 1 CopyPart- 00:07:32.604 [2024-07-14 21:08:29.320659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003d2e cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.320684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.604 #22 NEW cov: 11990 ft: 13697 corp: 11/29b lim: 10 exec/s: 0 rss: 70Mb L: 2/6 MS: 1 ChangeByte- 00:07:32.604 [2024-07-14 21:08:29.371033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e94 cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.371059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.604 
[2024-07-14 21:08:29.371110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009494 cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.371123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.604 [2024-07-14 21:08:29.371175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009494 cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.371188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.604 #24 NEW cov: 11990 ft: 13753 corp: 12/35b lim: 10 exec/s: 0 rss: 70Mb L: 6/6 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:32.604 [2024-07-14 21:08:29.410919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:07:32.604 [2024-07-14 21:08:29.410944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.604 #25 NEW cov: 11990 ft: 13773 corp: 13/37b lim: 10 exec/s: 0 rss: 70Mb L: 2/6 MS: 1 CopyPart- 00:07:32.605 [2024-07-14 21:08:29.451281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:32.605 [2024-07-14 21:08:29.451306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.605 [2024-07-14 21:08:29.451358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.605 [2024-07-14 21:08:29.451371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.605 [2024-07-14 21:08:29.451421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.605 [2024-07-14 21:08:29.451434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.605 #26 NEW cov: 11990 ft: 13800 corp: 14/44b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:07:32.605 [2024-07-14 21:08:29.491126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000324a cdw11:00000000 00:07:32.605 [2024-07-14 21:08:29.491152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.864 #27 NEW cov: 11990 ft: 13853 corp: 15/46b lim: 10 exec/s: 0 rss: 70Mb L: 2/7 MS: 1 CrossOver- 00:07:32.864 [2024-07-14 21:08:29.541283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000632e cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.541309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.864 #28 NEW cov: 11990 ft: 13864 corp: 16/48b lim: 10 exec/s: 0 rss: 70Mb L: 2/7 MS: 1 ChangeByte- 00:07:32.864 [2024-07-14 21:08:29.571355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000063ee cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.571379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:32.864 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:32.864 #30 NEW cov: 12013 ft: 13901 corp: 17/50b lim: 10 exec/s: 0 rss: 70Mb L: 2/7 MS: 2 EraseBytes-InsertByte- 00:07:32.864 [2024-07-14 21:08:29.621807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f94 cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.621833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.864 [2024-07-14 21:08:29.621885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009494 cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.621899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.864 [2024-07-14 21:08:29.621952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009494 cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.621966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.864 #31 NEW cov: 12013 ft: 13928 corp: 18/56b lim: 10 exec/s: 0 rss: 70Mb L: 6/7 MS: 1 ChangeBit- 00:07:32.864 [2024-07-14 21:08:29.671672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.671700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.864 #32 NEW cov: 12013 ft: 13949 corp: 19/58b lim: 10 exec/s: 32 rss: 70Mb L: 2/7 MS: 1 ShuffleBytes- 00:07:32.864 [2024-07-14 21:08:29.712006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.712031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.864 [2024-07-14 21:08:29.712080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.712094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.864 [2024-07-14 21:08:29.712146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.712159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.864 #33 NEW cov: 12013 ft: 13966 corp: 20/64b lim: 10 exec/s: 33 rss: 70Mb L: 6/7 MS: 1 ShuffleBytes- 00:07:32.864 [2024-07-14 21:08:29.762198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.762223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.864 [2024-07-14 21:08:29.762275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.762289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:32.864 [2024-07-14 21:08:29.762339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:32.864 [2024-07-14 21:08:29.762353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.124 #34 NEW cov: 12013 ft: 13991 corp: 21/70b lim: 10 exec/s: 34 rss: 70Mb L: 6/7 MS: 1 CrossOver- 00:07:33.124 [2024-07-14 21:08:29.812054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008ad4 cdw11:00000000 00:07:33.124 [2024-07-14 21:08:29.812080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.124 #35 NEW cov: 12013 ft: 14006 corp: 22/72b lim: 10 exec/s: 35 rss: 70Mb L: 2/7 MS: 1 CopyPart- 00:07:33.124 [2024-07-14 21:08:29.852210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a63 cdw11:00000000 00:07:33.124 [2024-07-14 21:08:29.852235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.125 #36 NEW cov: 12013 ft: 14078 corp: 23/75b lim: 10 exec/s: 36 rss: 70Mb L: 3/7 MS: 1 InsertByte- 00:07:33.125 [2024-07-14 21:08:29.892561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e2e cdw11:00000000 00:07:33.125 [2024-07-14 21:08:29.892586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.125 [2024-07-14 21:08:29.892639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.125 [2024-07-14 21:08:29.892653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.125 [2024-07-14 21:08:29.892705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.125 [2024-07-14 21:08:29.892719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.125 #37 NEW cov: 12013 ft: 14118 corp: 24/82b lim: 10 exec/s: 37 rss: 70Mb L: 7/7 MS: 1 CopyPart- 00:07:33.125 [2024-07-14 21:08:29.942461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.125 [2024-07-14 21:08:29.942487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.125 #38 NEW cov: 12013 ft: 14144 corp: 25/85b lim: 10 exec/s: 38 rss: 70Mb L: 3/7 MS: 1 EraseBytes- 00:07:33.125 [2024-07-14 21:08:29.982666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008686 cdw11:00000000 00:07:33.125 [2024-07-14 21:08:29.982691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.125 [2024-07-14 21:08:29.982744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000860a cdw11:00000000 00:07:33.125 [2024-07-14 21:08:29.982757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:33.125 #39 NEW cov: 12013 ft: 14160 corp: 26/90b lim: 10 exec/s: 39 rss: 70Mb L: 5/7 MS: 1 InsertRepeatedBytes- 00:07:33.384 [2024-07-14 21:08:30.033121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007979 cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.033148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.384 [2024-07-14 21:08:30.033204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007979 cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.033218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.384 [2024-07-14 21:08:30.033273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007979 cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.033287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.384 [2024-07-14 21:08:30.033339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003d2e cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.033352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.384 #40 NEW cov: 12013 ft: 14366 corp: 27/98b lim: 10 exec/s: 40 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:33.384 [2024-07-14 21:08:30.082912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.082940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.384 [2024-07-14 21:08:30.112952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.112978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.384 #42 NEW cov: 12013 ft: 14373 corp: 28/101b lim: 10 exec/s: 42 rss: 70Mb L: 3/8 MS: 2 CrossOver-ChangeBit- 00:07:33.384 [2024-07-14 21:08:30.143018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e1c cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.143044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.384 #43 NEW cov: 12013 ft: 14413 corp: 29/104b lim: 10 exec/s: 43 rss: 70Mb L: 3/8 MS: 1 CrossOver- 00:07:33.384 [2024-07-14 21:08:30.193198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.193223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.384 #44 NEW cov: 12013 ft: 14419 corp: 30/107b lim: 10 exec/s: 44 rss: 70Mb L: 3/8 MS: 1 EraseBytes- 00:07:33.384 [2024-07-14 21:08:30.243604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009494 cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.243632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:33.384 [2024-07-14 21:08:30.243685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002f94 cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.243699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.384 [2024-07-14 21:08:30.243750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009494 cdw11:00000000 00:07:33.384 [2024-07-14 21:08:30.243764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.384 #45 NEW cov: 12013 ft: 14434 corp: 31/113b lim: 10 exec/s: 45 rss: 70Mb L: 6/8 MS: 1 ShuffleBytes- 00:07:33.644 [2024-07-14 21:08:30.293473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009f0e cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.293500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.644 #46 NEW cov: 12013 ft: 14442 corp: 32/116b lim: 10 exec/s: 46 rss: 70Mb L: 3/8 MS: 1 InsertByte- 00:07:33.644 [2024-07-14 21:08:30.333603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.333629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.644 #47 NEW cov: 12013 ft: 14460 corp: 33/118b lim: 10 exec/s: 47 rss: 70Mb L: 2/8 MS: 1 ChangeBit- 00:07:33.644 [2024-07-14 21:08:30.383824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000324a cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.383849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.383897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f700 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.383910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.644 #48 NEW cov: 12013 ft: 14481 corp: 34/122b lim: 10 exec/s: 48 rss: 70Mb L: 4/8 MS: 1 CMP- DE: "\367\000"- 00:07:33.644 [2024-07-14 21:08:30.434112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008686 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.434137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.434188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000860a cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.434203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.434253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004af1 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.434267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.644 #49 NEW cov: 12013 ft: 14501 corp: 35/128b lim: 10 exec/s: 49 rss: 
71Mb L: 6/8 MS: 1 InsertByte- 00:07:33.644 [2024-07-14 21:08:30.484199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2e cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.484224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.484272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.484286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.484340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.484355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.644 #50 NEW cov: 12013 ft: 14521 corp: 36/135b lim: 10 exec/s: 50 rss: 71Mb L: 7/8 MS: 1 CrossOver- 00:07:33.644 [2024-07-14 21:08:30.524301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.524327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.524377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.524391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.644 [2024-07-14 21:08:30.524438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.644 [2024-07-14 21:08:30.524457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.644 #51 NEW cov: 12013 ft: 14523 corp: 37/142b lim: 10 exec/s: 51 rss: 71Mb L: 7/8 MS: 1 InsertRepeatedBytes- 00:07:33.903 [2024-07-14 21:08:30.564209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a63 cdw11:00000000 00:07:33.903 [2024-07-14 21:08:30.564235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.903 #57 NEW cov: 12013 ft: 14614 corp: 38/145b lim: 10 exec/s: 57 rss: 71Mb L: 3/8 MS: 1 ChangeByte- 00:07:33.903 [2024-07-14 21:08:30.614662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e94 cdw11:00000000 00:07:33.903 [2024-07-14 21:08:30.614688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.903 [2024-07-14 21:08:30.614738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f700 cdw11:00000000 00:07:33.903 [2024-07-14 21:08:30.614751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.903 [2024-07-14 21:08:30.614799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009494 cdw11:00000000 00:07:33.903 
[2024-07-14 21:08:30.614812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.903 [2024-07-14 21:08:30.614860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009494 cdw11:00000000 00:07:33.903 [2024-07-14 21:08:30.614873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.903 #58 NEW cov: 12013 ft: 14628 corp: 39/153b lim: 10 exec/s: 58 rss: 71Mb L: 8/8 MS: 1 PersAutoDict- DE: "\367\000"- 00:07:33.903 [2024-07-14 21:08:30.654386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008ad4 cdw11:00000000 00:07:33.903 [2024-07-14 21:08:30.654412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.903 #59 NEW cov: 12013 ft: 14638 corp: 40/155b lim: 10 exec/s: 29 rss: 71Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:33.903 #59 DONE cov: 12013 ft: 14638 corp: 40/155b lim: 10 exec/s: 29 rss: 71Mb 00:07:33.903 ###### Recommended dictionary. ###### 00:07:33.903 "\367\000" # Uses: 1 00:07:33.903 ###### End of recommended dictionary. ###### 00:07:33.904 Done 59 runs in 2 second(s) 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:33.904 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.163 21:08:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:34.163 [2024-07-14 21:08:30.844351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:34.163 [2024-07-14 21:08:30.844425] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020238 ] 00:07:34.163 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.422 [2024-07-14 21:08:31.091928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.422 [2024-07-14 21:08:31.122046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.422 [2024-07-14 21:08:31.174149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.422 [2024-07-14 21:08:31.190486] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:34.422 INFO: Running with entropic power schedule (0xFF, 100). 00:07:34.422 INFO: Seed: 1421805569 00:07:34.422 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:34.422 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:34.422 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:34.422 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.422 #2 INITED exec/s: 0 rss: 62Mb 00:07:34.422 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:34.422 This may also happen if the target rejected all inputs we tried so far 00:07:34.422 [2024-07-14 21:08:31.236026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.422 [2024-07-14 21:08:31.236055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.422 [2024-07-14 21:08:31.236110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.422 [2024-07-14 21:08:31.236124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.422 [2024-07-14 21:08:31.236177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.422 [2024-07-14 21:08:31.236191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.422 [2024-07-14 21:08:31.236239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:34.422 [2024-07-14 21:08:31.236252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.681 NEW_FUNC[1/690]: 0x49eeb0 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:34.681 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:34.681 #3 NEW cov: 11769 ft: 11769 corp: 2/10b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000p"- 00:07:34.681 [2024-07-14 21:08:31.566556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.681 [2024-07-14 21:08:31.566591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.681 [2024-07-14 21:08:31.566640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.681 [2024-07-14 21:08:31.566654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.940 #4 NEW cov: 11899 ft: 12669 corp: 3/15b lim: 10 exec/s: 0 rss: 69Mb L: 5/9 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:34.940 [2024-07-14 21:08:31.606570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.606597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.606647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.606660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.940 #5 NEW cov: 11905 ft: 12958 corp: 4/20b lim: 10 exec/s: 0 rss: 69Mb L: 5/9 MS: 1 ChangeBinInt- 00:07:34.940 [2024-07-14 21:08:31.656995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.657023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.657072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.657085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.657133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.657146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.657194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.657207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.940 #6 NEW cov: 11990 ft: 13174 corp: 5/29b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:34.940 [2024-07-14 21:08:31.707082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.707107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.707159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.707172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.707220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.707249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.707297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.707310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.940 #7 NEW cov: 11990 ft: 13294 corp: 6/38b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:34.940 [2024-07-14 21:08:31.746860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.746885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.940 #12 NEW cov: 11990 ft: 13540 corp: 7/40b lim: 10 exec/s: 0 rss: 69Mb L: 2/9 MS: 5 CrossOver-ChangeByte-ChangeBit-CopyPart-CopyPart- 00:07:34.940 [2024-07-14 21:08:31.787315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.787340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.787390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.787403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.787456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.787469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.787517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.787530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.940 #13 NEW cov: 11990 ft: 13608 corp: 8/49b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000p"- 00:07:34.940 [2024-07-14 21:08:31.827166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b300 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.827191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.940 [2024-07-14 21:08:31.827240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 00:07:34.940 [2024-07-14 21:08:31.827253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.200 #14 NEW cov: 11990 ft: 13714 corp: 9/54b lim: 10 exec/s: 0 rss: 70Mb L: 5/9 MS: 1 ChangeByte- 00:07:35.200 [2024-07-14 21:08:31.877229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ae29 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.877254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.200 #15 NEW cov: 11990 ft: 13726 corp: 10/57b lim: 10 exec/s: 0 rss: 70Mb L: 3/9 MS: 1 InsertByte- 00:07:35.200 [2024-07-14 21:08:31.927758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.927786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.927833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.927846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.927894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.927908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.927955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 
cdw10:00000070 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.927967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.928013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003d0a cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.928027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.200 #16 NEW cov: 11990 ft: 13828 corp: 11/67b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertByte- 00:07:35.200 [2024-07-14 21:08:31.977781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.977806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.977856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.977869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.977918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.977932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:31.977996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000500 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:31.978009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.200 #17 NEW cov: 11990 ft: 13841 corp: 12/76b lim: 10 exec/s: 0 rss: 70Mb L: 9/10 MS: 1 CrossOver- 00:07:35.200 [2024-07-14 21:08:32.017725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b300 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:32.017752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:32.017803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:32.017815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.200 #18 NEW cov: 11990 ft: 13887 corp: 13/81b lim: 10 exec/s: 0 rss: 70Mb L: 5/10 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:35.200 [2024-07-14 21:08:32.067957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:32.067984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.200 [2024-07-14 21:08:32.068035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.200 [2024-07-14 21:08:32.068049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:35.200 [2024-07-14 21:08:32.068100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003d0a cdw11:00000000 00:07:35.200 [2024-07-14 21:08:32.068115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.459 #19 NEW cov: 11990 ft: 14024 corp: 14/87b lim: 10 exec/s: 0 rss: 70Mb L: 6/10 MS: 1 EraseBytes- 00:07:35.459 [2024-07-14 21:08:32.117865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000038ae cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.117892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.460 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:35.460 #24 NEW cov: 12013 ft: 14080 corp: 15/90b lim: 10 exec/s: 0 rss: 70Mb L: 3/10 MS: 5 EraseBytes-ChangeByte-ShuffleBytes-ChangeASCIIInt-CrossOver- 00:07:35.460 [2024-07-14 21:08:32.158354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.158381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.158430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a600 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.158448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.158497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.158510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.158560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000703d cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.158574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.460 #25 NEW cov: 12013 ft: 14091 corp: 16/99b lim: 10 exec/s: 0 rss: 70Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:35.460 [2024-07-14 21:08:32.208588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.208616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.208665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.208678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.208726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.208740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.208788] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.208802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.208850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003d0a cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.208864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.460 #26 NEW cov: 12013 ft: 14100 corp: 17/109b lim: 10 exec/s: 26 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:35.460 [2024-07-14 21:08:32.248631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007777 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.248658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.248709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007777 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.248722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.248774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000077ae cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.248787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.248837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000029ae cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.248850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.460 #27 NEW cov: 12013 ft: 14108 corp: 18/117b lim: 10 exec/s: 27 rss: 70Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:35.460 [2024-07-14 21:08:32.298751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.298778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.298828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.298840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.298888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.298901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.298950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000500 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.298963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.460 #28 NEW cov: 
12013 ft: 14126 corp: 19/126b lim: 10 exec/s: 28 rss: 70Mb L: 9/10 MS: 1 CrossOver- 00:07:35.460 [2024-07-14 21:08:32.348881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b300 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.348906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.348956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.348969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.349019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.349032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.460 [2024-07-14 21:08:32.349080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:35.460 [2024-07-14 21:08:32.349093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.719 #29 NEW cov: 12013 ft: 14143 corp: 20/135b lim: 10 exec/s: 29 rss: 70Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:35.719 [2024-07-14 21:08:32.398676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ae29 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.398705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.719 #30 NEW cov: 12013 ft: 14160 corp: 21/138b lim: 10 exec/s: 30 rss: 70Mb L: 3/10 MS: 1 ChangeByte- 00:07:35.719 [2024-07-14 21:08:32.439136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.439162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.439208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.439221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.439267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.439280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.439326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.439339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.719 #31 NEW cov: 12013 ft: 14163 corp: 22/147b lim: 10 exec/s: 31 rss: 70Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:35.719 [2024-07-14 21:08:32.479391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.479416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.479469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.479483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.479531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.479544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.479590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.479603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.479649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.479662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.719 #32 NEW cov: 12013 ft: 14203 corp: 23/157b lim: 10 exec/s: 32 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:07:35.719 [2024-07-14 21:08:32.519492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.519518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.519567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:07:35.719 [2024-07-14 21:08:32.519580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.719 [2024-07-14 21:08:32.519627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.519644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.519691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.519704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.519750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.519763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.720 #33 NEW cov: 12013 ft: 14212 corp: 24/167b lim: 10 exec/s: 33 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:07:35.720 [2024-07-14 21:08:32.569553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007777 cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.569578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.569628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007777 cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.569642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.569690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000077ae cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.569704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.569749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002900 cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.569761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.720 #34 NEW cov: 12013 ft: 14255 corp: 25/175b lim: 10 exec/s: 34 rss: 70Mb L: 8/10 MS: 1 CrossOver- 00:07:35.720 [2024-07-14 21:08:32.619687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008988 cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.619713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.619762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000888e cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.619776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.619824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000077ae cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.619838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.720 [2024-07-14 21:08:32.619886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000029ae cdw11:00000000 00:07:35.720 [2024-07-14 21:08:32.619899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.985 #35 NEW cov: 12013 ft: 14296 corp: 26/183b lim: 10 exec/s: 35 rss: 70Mb L: 8/10 MS: 1 ChangeBinInt- 00:07:35.985 [2024-07-14 21:08:32.659837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.659862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.659912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.659924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.659974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.659988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.660036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.660048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.660095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.660108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.985 #36 NEW cov: 12013 ft: 14304 corp: 27/193b lim: 10 exec/s: 36 rss: 70Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000p"- 00:07:35.985 [2024-07-14 21:08:32.709742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000900 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.709767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.709814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.709827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.985 #37 NEW cov: 12013 ft: 14333 corp: 28/198b lim: 10 exec/s: 37 rss: 70Mb L: 5/10 MS: 1 ChangeBinInt- 00:07:35.985 [2024-07-14 21:08:32.749937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.749961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.750012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.750025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.750071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.750084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.985 #38 NEW cov: 12013 ft: 14348 corp: 29/204b lim: 10 exec/s: 38 rss: 70Mb L: 6/10 MS: 1 InsertByte- 00:07:35.985 [2024-07-14 21:08:32.790014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.790039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.790087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.790101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.790149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.790162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.985 #39 NEW cov: 12013 ft: 14360 corp: 30/210b lim: 10 exec/s: 39 rss: 70Mb L: 6/10 MS: 1 ChangeBinInt- 00:07:35.985 [2024-07-14 21:08:32.840249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.840274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.840325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.840337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.840386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000070 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.840400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.840451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.840464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.985 #40 NEW cov: 12013 ft: 14369 corp: 31/218b lim: 10 exec/s: 40 rss: 71Mb L: 8/10 MS: 1 EraseBytes- 00:07:35.985 [2024-07-14 21:08:32.880384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.880410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.880462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.880475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.880522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.985 [2024-07-14 21:08:32.880535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.985 [2024-07-14 21:08:32.880582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000800 cdw11:00000000 00:07:35.986 [2024-07-14 21:08:32.880595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.244 #41 NEW cov: 12013 ft: 14387 corp: 32/227b lim: 10 exec/s: 41 rss: 71Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:36.244 [2024-07-14 21:08:32.930290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b3ae cdw11:00000000 
00:07:36.244 [2024-07-14 21:08:32.930317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.244 [2024-07-14 21:08:32.930365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000029ae cdw11:00000000 00:07:36.244 [2024-07-14 21:08:32.930378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.244 #42 NEW cov: 12013 ft: 14388 corp: 33/231b lim: 10 exec/s: 42 rss: 71Mb L: 4/10 MS: 1 CrossOver- 00:07:36.244 [2024-07-14 21:08:32.970416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:32.970446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.244 [2024-07-14 21:08:32.970495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:32.970509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.244 #43 NEW cov: 12013 ft: 14403 corp: 34/236b lim: 10 exec/s: 43 rss: 71Mb L: 5/10 MS: 1 ChangeBinInt- 00:07:36.244 [2024-07-14 21:08:33.010750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:33.010775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.244 [2024-07-14 21:08:33.010827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000800 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:33.010840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.244 [2024-07-14 21:08:33.010888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007070 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:33.010901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.244 [2024-07-14 21:08:33.010947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:33.010960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.244 #44 NEW cov: 12013 ft: 14445 corp: 35/244b lim: 10 exec/s: 44 rss: 71Mb L: 8/10 MS: 1 CopyPart- 00:07:36.244 [2024-07-14 21:08:33.061024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.244 [2024-07-14 21:08:33.061049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.061098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.061111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.061159] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.061173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.061237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.061250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.061299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000700a cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.061312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.245 #45 NEW cov: 12013 ft: 14448 corp: 36/254b lim: 10 exec/s: 45 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:07:36.245 [2024-07-14 21:08:33.101035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.101060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.101110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.101124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.101173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.101186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.101236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000cc70 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.101249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.245 #46 NEW cov: 12013 ft: 14468 corp: 37/263b lim: 10 exec/s: 46 rss: 71Mb L: 9/10 MS: 1 ChangeByte- 00:07:36.245 [2024-07-14 21:08:33.141098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.141124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.141175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.141189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.245 [2024-07-14 21:08:33.141240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002e0a cdw11:00000000 00:07:36.245 [2024-07-14 21:08:33.141254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.506 #47 NEW cov: 12013 ft: 14483 corp: 38/269b 
lim: 10 exec/s: 47 rss: 71Mb L: 6/10 MS: 1 InsertByte- 00:07:36.506 [2024-07-14 21:08:33.181338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002800 cdw11:00000000 00:07:36.506 [2024-07-14 21:08:33.181364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.506 [2024-07-14 21:08:33.181414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:07:36.506 [2024-07-14 21:08:33.181428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.506 [2024-07-14 21:08:33.181482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:36.506 [2024-07-14 21:08:33.181496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.506 [2024-07-14 21:08:33.181546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:36.506 [2024-07-14 21:08:33.181559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.506 #48 NEW cov: 12013 ft: 14489 corp: 39/278b lim: 10 exec/s: 48 rss: 71Mb L: 9/10 MS: 1 ChangeByte- 00:07:36.506 [2024-07-14 21:08:33.231391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:36.506 [2024-07-14 21:08:33.231417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.506 [2024-07-14 21:08:33.231472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.506 [2024-07-14 21:08:33.231486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.507 [2024-07-14 21:08:33.231539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ae29 cdw11:00000000 00:07:36.507 [2024-07-14 21:08:33.231553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.507 #49 NEW cov: 12013 ft: 14494 corp: 40/285b lim: 10 exec/s: 24 rss: 71Mb L: 7/10 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:36.507 #49 DONE cov: 12013 ft: 14494 corp: 40/285b lim: 10 exec/s: 24 rss: 71Mb 00:07:36.507 ###### Recommended dictionary. ###### 00:07:36.507 "\000\000\000\000\000\000\000p" # Uses: 2 00:07:36.507 "\000\000\000\000" # Uses: 2 00:07:36.507 "\001\000\000\000" # Uses: 0 00:07:36.507 ###### End of recommended dictionary. 
###### 00:07:36.507 Done 49 runs in 2 second(s) 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.507 21:08:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:36.767 [2024-07-14 21:08:33.408939] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:36.767 [2024-07-14 21:08:33.409010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020638 ] 00:07:36.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.767 [2024-07-14 21:08:33.669610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.025 [2024-07-14 21:08:33.698877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.025 [2024-07-14 21:08:33.751121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.025 [2024-07-14 21:08:33.767458] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:37.025 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:37.025 INFO: Seed: 3997815526 00:07:37.025 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:37.025 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:37.025 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:37.025 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.025 [2024-07-14 21:08:33.834448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.025 [2024-07-14 21:08:33.834494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.025 #2 INITED cov: 11797 ft: 11798 corp: 1/1b exec/s: 0 rss: 67Mb 00:07:37.025 [2024-07-14 21:08:33.885460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.025 [2024-07-14 21:08:33.885498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.025 [2024-07-14 21:08:33.885565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.025 [2024-07-14 21:08:33.885582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.025 [2024-07-14 21:08:33.885654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.025 [2024-07-14 21:08:33.885669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.025 [2024-07-14 21:08:33.885742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.025 [2024-07-14 21:08:33.885758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.025 #3 NEW cov: 11927 ft: 13154 corp: 2/5b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:37.284 [2024-07-14 21:08:33.955057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:33.955085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:33.955153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:33.955168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.284 #4 NEW cov: 11933 ft: 13496 corp: 3/7b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 InsertByte- 00:07:37.284 [2024-07-14 21:08:34.005581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:37.284 [2024-07-14 21:08:34.005608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.005680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.005694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.005760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.005775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.284 #5 NEW cov: 12018 ft: 13875 corp: 4/10b lim: 5 exec/s: 0 rss: 68Mb L: 3/4 MS: 1 CrossOver- 00:07:37.284 [2024-07-14 21:08:34.076547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.076573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.076651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.076667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.076734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.076749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.076815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.076833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.076899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.076916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.284 #6 NEW cov: 12018 ft: 14105 corp: 5/15b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:37.284 [2024-07-14 21:08:34.126612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.126637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.126702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.126716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.126778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.126792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.126857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.126870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.126939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.126953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.284 #7 NEW cov: 12018 ft: 14136 corp: 6/20b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeByte- 00:07:37.284 [2024-07-14 21:08:34.186203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.186230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.186299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.186314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.284 [2024-07-14 21:08:34.186383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.284 [2024-07-14 21:08:34.186397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.544 #8 NEW cov: 12018 ft: 14230 corp: 7/23b lim: 5 exec/s: 0 rss: 68Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:37.544 [2024-07-14 21:08:34.255658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.255684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.544 #9 NEW cov: 12018 ft: 14396 corp: 8/24b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 EraseBytes- 00:07:37.544 [2024-07-14 21:08:34.307302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.307331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.307421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.307436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.307512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.307526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.307593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.307609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.307677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.307691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.544 #10 NEW cov: 12018 ft: 14424 corp: 9/29b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertByte- 00:07:37.544 [2024-07-14 21:08:34.366914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.366942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.367017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.367032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.367105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.367120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.544 #11 NEW cov: 12018 ft: 14493 corp: 10/32b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 ChangeBit- 00:07:37.544 [2024-07-14 21:08:34.416688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.416716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.544 [2024-07-14 21:08:34.416788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.544 [2024-07-14 21:08:34.416803] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.544 #12 NEW cov: 12018 ft: 14511 corp: 11/34b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:37.803 [2024-07-14 21:08:34.467955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.467984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.468063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.468078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.468142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.468156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.468223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.468238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.468304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.468318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.803 #13 NEW cov: 12018 ft: 14528 corp: 12/39b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ChangeByte- 00:07:37.803 [2024-07-14 21:08:34.518215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.518241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.518307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.518321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.518393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.803 [2024-07-14 21:08:34.518408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.803 [2024-07-14 21:08:34.518473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:37.803 [2024-07-14 21:08:34.518487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.804 [2024-07-14 21:08:34.518552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.518577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.804 #14 NEW cov: 12018 ft: 14587 corp: 13/44b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:07:37.804 [2024-07-14 21:08:34.577710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.577736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.804 [2024-07-14 21:08:34.577807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.577821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.804 [2024-07-14 21:08:34.577892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.577907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.804 #15 NEW cov: 12018 ft: 14602 corp: 14/47b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 CrossOver- 00:07:37.804 [2024-07-14 21:08:34.627153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.627178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.804 #16 NEW cov: 12018 ft: 14639 corp: 15/48b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 CrossOver- 00:07:37.804 [2024-07-14 21:08:34.677594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.677619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.804 [2024-07-14 21:08:34.677681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:37.804 [2024-07-14 21:08:34.677695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.804 #17 NEW cov: 12018 ft: 14688 corp: 16/50b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 EraseBytes- 00:07:38.062 [2024-07-14 21:08:34.728053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.062 [2024-07-14 21:08:34.728079] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.062 [2024-07-14 21:08:34.728149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.062 [2024-07-14 21:08:34.728163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.062 [2024-07-14 21:08:34.728234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.062 [2024-07-14 21:08:34.728247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.320 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:38.320 #18 NEW cov: 12041 ft: 14734 corp: 17/53b lim: 5 exec/s: 18 rss: 70Mb L: 3/5 MS: 1 ChangeBinInt- 00:07:38.320 [2024-07-14 21:08:35.058722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.320 [2024-07-14 21:08:35.058755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.058881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.058895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.059025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.059040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.059176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.059192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.059330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.059346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.321 #19 NEW cov: 12041 ft: 14871 corp: 18/58b lim: 5 exec/s: 19 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:07:38.321 [2024-07-14 21:08:35.107862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.107889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.108040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.108055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.108195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.108210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.321 #20 NEW cov: 12041 ft: 14944 corp: 19/61b lim: 5 exec/s: 20 rss: 70Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:38.321 [2024-07-14 21:08:35.147698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.147728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.147864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.147882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.321 #21 NEW cov: 12041 ft: 14956 corp: 20/63b lim: 5 exec/s: 21 rss: 70Mb L: 2/5 MS: 1 EraseBytes- 00:07:38.321 [2024-07-14 21:08:35.198810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.198837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.198968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.198985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.199113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.199130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.321 [2024-07-14 21:08:35.199258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.321 [2024-07-14 21:08:35.199275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.321 #22 NEW cov: 12041 ft: 14978 corp: 21/67b lim: 5 exec/s: 22 rss: 70Mb L: 4/5 MS: 1 CrossOver- 00:07:38.579 [2024-07-14 21:08:35.248408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.579 [2024-07-14 21:08:35.248435] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.579 [2024-07-14 21:08:35.248576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.579 [2024-07-14 21:08:35.248594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.579 #23 NEW cov: 12041 ft: 15059 corp: 22/69b lim: 5 exec/s: 23 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:07:38.579 [2024-07-14 21:08:35.308620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.579 [2024-07-14 21:08:35.308647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.579 [2024-07-14 21:08:35.308774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.308792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.580 #24 NEW cov: 12041 ft: 15076 corp: 23/71b lim: 5 exec/s: 24 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:07:38.580 [2024-07-14 21:08:35.358983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.359010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.359142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.359157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.359282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.359298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.580 #25 NEW cov: 12041 ft: 15170 corp: 24/74b lim: 5 exec/s: 25 rss: 70Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:38.580 [2024-07-14 21:08:35.409455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.409483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.409611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.409629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.409776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.409793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.409929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.409947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.580 #26 NEW cov: 12041 ft: 15179 corp: 25/78b lim: 5 exec/s: 26 rss: 70Mb L: 4/5 MS: 1 ChangeBinInt- 00:07:38.580 [2024-07-14 21:08:35.459857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.459883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.460019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.460035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.460166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.460184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.460308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.460323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.580 [2024-07-14 21:08:35.460447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.580 [2024-07-14 21:08:35.460463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.838 #27 NEW cov: 12041 ft: 15192 corp: 26/83b lim: 5 exec/s: 27 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:07:38.838 [2024-07-14 21:08:35.529526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.838 [2024-07-14 21:08:35.529554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.838 [2024-07-14 21:08:35.529705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.838 [2024-07-14 21:08:35.529721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.838 [2024-07-14 21:08:35.529835] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.838 [2024-07-14 21:08:35.529854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.838 #28 NEW cov: 12041 ft: 15222 corp: 27/86b lim: 5 exec/s: 28 rss: 70Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:38.838 [2024-07-14 21:08:35.579411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.838 [2024-07-14 21:08:35.579439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.838 [2024-07-14 21:08:35.579568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.838 [2024-07-14 21:08:35.579585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.838 #29 NEW cov: 12041 ft: 15229 corp: 28/88b lim: 5 exec/s: 29 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:07:38.838 [2024-07-14 21:08:35.640334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.640361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.640490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.640507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.640636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.640651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.640771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.640787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.640905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.640921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.839 #30 NEW cov: 12041 ft: 15243 corp: 29/93b lim: 5 exec/s: 30 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:38.839 [2024-07-14 21:08:35.689878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.689904] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.690044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.690061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.690187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.690204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.690330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.690347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.839 #31 NEW cov: 12041 ft: 15267 corp: 30/97b lim: 5 exec/s: 31 rss: 70Mb L: 4/5 MS: 1 ChangeBit- 00:07:38.839 [2024-07-14 21:08:35.729840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.729865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.839 [2024-07-14 21:08:35.730000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.839 [2024-07-14 21:08:35.730017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.098 #32 NEW cov: 12041 ft: 15282 corp: 31/99b lim: 5 exec/s: 32 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:07:39.098 [2024-07-14 21:08:35.790035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.098 [2024-07-14 21:08:35.790061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.098 [2024-07-14 21:08:35.790198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.098 [2024-07-14 21:08:35.790214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.098 #33 NEW cov: 12041 ft: 15291 corp: 32/101b lim: 5 exec/s: 16 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:39.098 #33 DONE cov: 12041 ft: 15291 corp: 32/101b lim: 5 exec/s: 16 rss: 70Mb 00:07:39.098 Done 33 runs in 2 second(s) 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- 
../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.098 21:08:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:39.098 [2024-07-14 21:08:35.981017] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:39.098 [2024-07-14 21:08:35.981081] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021165 ] 00:07:39.357 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.357 [2024-07-14 21:08:36.232860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.616 [2024-07-14 21:08:36.264053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.616 [2024-07-14 21:08:36.316189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.616 [2024-07-14 21:08:36.332516] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:39.616 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:39.616 INFO: Seed: 2268843468 00:07:39.616 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:39.616 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:39.616 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:39.616 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.616 [2024-07-14 21:08:36.377702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.616 [2024-07-14 21:08:36.377730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.616 #2 INITED cov: 11797 ft: 11798 corp: 1/1b exec/s: 0 rss: 68Mb 00:07:39.616 [2024-07-14 21:08:36.417854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.616 [2024-07-14 21:08:36.417880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.616 [2024-07-14 21:08:36.417933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.616 [2024-07-14 21:08:36.417949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.616 #3 NEW cov: 11927 ft: 13201 corp: 2/3b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:07:39.616 [2024-07-14 21:08:36.467816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.616 [2024-07-14 21:08:36.467840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.616 #4 NEW cov: 11933 ft: 13312 corp: 3/4b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 ChangeBinInt- 00:07:39.616 [2024-07-14 21:08:36.508065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.616 [2024-07-14 21:08:36.508089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.616 [2024-07-14 21:08:36.508142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.616 [2024-07-14 21:08:36.508156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.875 #5 NEW cov: 12018 ft: 13631 corp: 4/6b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 ChangeBit- 00:07:39.875 [2024-07-14 21:08:36.558518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.558542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.558598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.558612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.558666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.558681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.558734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.558746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.875 #6 NEW cov: 12018 ft: 14035 corp: 5/10b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 CrossOver- 00:07:39.875 [2024-07-14 21:08:36.598167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.598191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.875 #7 NEW cov: 12018 ft: 14145 corp: 6/11b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:39.875 [2024-07-14 21:08:36.648449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.648474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.648528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.648543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.875 #8 NEW cov: 12018 ft: 14201 corp: 7/13b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 InsertByte- 00:07:39.875 [2024-07-14 21:08:36.688590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.688617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.688672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.688686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.875 #9 NEW cov: 12018 ft: 14236 corp: 8/15b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:39.875 [2024-07-14 21:08:36.728837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:39.875 [2024-07-14 21:08:36.728863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.728919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.728933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.875 [2024-07-14 21:08:36.728984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.875 [2024-07-14 21:08:36.728999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.875 #10 NEW cov: 12018 ft: 14452 corp: 9/18b lim: 5 exec/s: 0 rss: 69Mb L: 3/4 MS: 1 InsertByte- 00:07:40.134 [2024-07-14 21:08:36.778854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.778880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.134 [2024-07-14 21:08:36.778935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.778949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.134 #11 NEW cov: 12018 ft: 14498 corp: 10/20b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 InsertByte- 00:07:40.134 [2024-07-14 21:08:36.818794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.818820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.134 #12 NEW cov: 12018 ft: 14506 corp: 11/21b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ChangeByte- 00:07:40.134 [2024-07-14 21:08:36.869267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.869292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.134 [2024-07-14 21:08:36.869345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.869359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.134 [2024-07-14 21:08:36.869412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.869426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.134 #13 NEW cov: 12018 ft: 14528 
corp: 12/24b lim: 5 exec/s: 0 rss: 69Mb L: 3/4 MS: 1 ChangeBit- 00:07:40.134 [2024-07-14 21:08:36.919377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.134 [2024-07-14 21:08:36.919402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.134 [2024-07-14 21:08:36.919459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.135 [2024-07-14 21:08:36.919473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.135 [2024-07-14 21:08:36.919529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.135 [2024-07-14 21:08:36.919543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.135 #14 NEW cov: 12018 ft: 14545 corp: 13/27b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 InsertByte- 00:07:40.135 [2024-07-14 21:08:36.959335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.135 [2024-07-14 21:08:36.959361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.135 [2024-07-14 21:08:36.959417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.135 [2024-07-14 21:08:36.959431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.135 #15 NEW cov: 12018 ft: 14583 corp: 14/29b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:40.135 [2024-07-14 21:08:36.999310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.135 [2024-07-14 21:08:36.999335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.135 #16 NEW cov: 12018 ft: 14649 corp: 15/30b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:07:40.393 [2024-07-14 21:08:37.039712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.039738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.039795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.039809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.039865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.039880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.393 #17 NEW cov: 12018 ft: 14727 corp: 16/33b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 CrossOver- 00:07:40.393 [2024-07-14 21:08:37.089551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.089577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.393 #18 NEW cov: 12018 ft: 14742 corp: 17/34b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:07:40.393 [2024-07-14 21:08:37.129923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.129948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.130002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.130016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.130071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.130084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.393 #19 NEW cov: 12018 ft: 14754 corp: 18/37b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 ChangeByte- 00:07:40.393 [2024-07-14 21:08:37.180085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.180110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.180163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.180176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.180233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.180246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.393 #20 NEW cov: 12018 ft: 14838 corp: 19/40b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:40.393 [2024-07-14 21:08:37.230109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 
21:08:37.230138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.230195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.230208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.393 #21 NEW cov: 12018 ft: 14849 corp: 20/42b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:07:40.393 [2024-07-14 21:08:37.270357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.270381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.270435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.270452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.393 [2024-07-14 21:08:37.270506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.393 [2024-07-14 21:08:37.270519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.910 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:40.910 #22 NEW cov: 12041 ft: 14938 corp: 21/45b lim: 5 exec/s: 22 rss: 70Mb L: 3/4 MS: 1 InsertByte- 00:07:40.910 [2024-07-14 21:08:37.601752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.601786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.601843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.601858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.601915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.601929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.601986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.601999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.602053] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.602066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.910 #23 NEW cov: 12041 ft: 14998 corp: 22/50b lim: 5 exec/s: 23 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:07:40.910 [2024-07-14 21:08:37.641070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.641095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.910 #24 NEW cov: 12041 ft: 15025 corp: 23/51b lim: 5 exec/s: 24 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:40.910 [2024-07-14 21:08:37.681506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.681531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.681588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.681603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.681658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.681671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.910 #25 NEW cov: 12041 ft: 15032 corp: 24/54b lim: 5 exec/s: 25 rss: 71Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:40.910 [2024-07-14 21:08:37.731643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.731668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.731724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.731738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.731793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.731806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.910 #26 NEW cov: 12041 ft: 15060 corp: 25/57b lim: 5 exec/s: 26 rss: 71Mb L: 3/5 MS: 1 InsertByte- 00:07:40.910 [2024-07-14 21:08:37.771735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 
cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.771760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.771819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.771833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.910 [2024-07-14 21:08:37.771892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.771906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.910 #27 NEW cov: 12041 ft: 15068 corp: 26/60b lim: 5 exec/s: 27 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:07:40.910 [2024-07-14 21:08:37.811513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.910 [2024-07-14 21:08:37.811539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.168 #28 NEW cov: 12041 ft: 15093 corp: 27/61b lim: 5 exec/s: 28 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:07:41.168 [2024-07-14 21:08:37.862133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.862162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.168 [2024-07-14 21:08:37.862219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.862233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.168 [2024-07-14 21:08:37.862304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.862319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.168 [2024-07-14 21:08:37.862377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.862390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.168 #29 NEW cov: 12041 ft: 15104 corp: 28/65b lim: 5 exec/s: 29 rss: 71Mb L: 4/5 MS: 1 InsertByte- 00:07:41.168 [2024-07-14 21:08:37.901970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.901994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.168 [2024-07-14 21:08:37.902053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.902066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.168 #30 NEW cov: 12041 ft: 15140 corp: 29/67b lim: 5 exec/s: 30 rss: 71Mb L: 2/5 MS: 1 EraseBytes- 00:07:41.168 [2024-07-14 21:08:37.952426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.952455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.168 [2024-07-14 21:08:37.952507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.952521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.168 [2024-07-14 21:08:37.952576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.168 [2024-07-14 21:08:37.952589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.169 [2024-07-14 21:08:37.952638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:37.952652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.169 #31 NEW cov: 12041 ft: 15156 corp: 30/71b lim: 5 exec/s: 31 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:41.169 [2024-07-14 21:08:38.002388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:38.002413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.169 [2024-07-14 21:08:38.002475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:38.002490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.169 [2024-07-14 21:08:38.002542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:38.002556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.169 #32 NEW cov: 12041 ft: 15174 corp: 31/74b lim: 5 exec/s: 32 rss: 71Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:41.169 [2024-07-14 21:08:38.042495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:38.042519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.169 [2024-07-14 21:08:38.042574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:38.042588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.169 [2024-07-14 21:08:38.042640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.169 [2024-07-14 21:08:38.042653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.426 #33 NEW cov: 12041 ft: 15190 corp: 32/77b lim: 5 exec/s: 33 rss: 71Mb L: 3/5 MS: 1 ChangeBinInt- 00:07:41.426 [2024-07-14 21:08:38.092481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.092507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.426 [2024-07-14 21:08:38.092561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.092574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.426 #34 NEW cov: 12041 ft: 15197 corp: 33/79b lim: 5 exec/s: 34 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:07:41.426 [2024-07-14 21:08:38.132670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.132694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.426 [2024-07-14 21:08:38.132749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.132763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.426 #35 NEW cov: 12041 ft: 15206 corp: 34/81b lim: 5 exec/s: 35 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:07:41.426 [2024-07-14 21:08:38.172947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.172972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.426 [2024-07-14 21:08:38.173028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.173045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.426 [2024-07-14 21:08:38.173098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.173112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.426 #36 NEW cov: 12041 ft: 15214 corp: 35/84b lim: 5 exec/s: 36 rss: 71Mb L: 3/5 MS: 1 InsertByte- 00:07:41.426 [2024-07-14 21:08:38.212849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.212875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.426 [2024-07-14 21:08:38.212933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.212948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.426 #37 NEW cov: 12041 ft: 15224 corp: 36/86b lim: 5 exec/s: 37 rss: 71Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:41.426 [2024-07-14 21:08:38.263163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.426 [2024-07-14 21:08:38.263188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.426 [2024-07-14 21:08:38.263245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.427 [2024-07-14 21:08:38.263259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.427 [2024-07-14 21:08:38.263314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.427 [2024-07-14 21:08:38.263327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.427 #38 NEW cov: 12041 ft: 15232 corp: 37/89b lim: 5 exec/s: 38 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:07:41.427 [2024-07-14 21:08:38.312984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.427 [2024-07-14 21:08:38.313009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.685 #39 NEW cov: 12041 ft: 15244 corp: 38/90b lim: 5 exec/s: 39 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:41.685 [2024-07-14 21:08:38.363453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.685 [2024-07-14 21:08:38.363478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.685 [2024-07-14 21:08:38.363536] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.685 [2024-07-14 21:08:38.363550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.685 [2024-07-14 21:08:38.363603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.685 [2024-07-14 21:08:38.363617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.685 #40 NEW cov: 12041 ft: 15258 corp: 39/93b lim: 5 exec/s: 20 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:07:41.685 #40 DONE cov: 12041 ft: 15258 corp: 39/93b lim: 5 exec/s: 20 rss: 72Mb 00:07:41.685 Done 40 runs in 2 second(s) 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.685 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.686 21:08:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:41.686 [2024-07-14 21:08:38.518818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:41.686 [2024-07-14 21:08:38.518873] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021604 ] 00:07:41.686 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.944 [2024-07-14 21:08:38.697950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.944 [2024-07-14 21:08:38.719501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.944 [2024-07-14 21:08:38.771775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.944 [2024-07-14 21:08:38.788092] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:41.944 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.944 INFO: Seed: 429857522 00:07:41.944 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:41.944 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:41.944 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:41.944 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.944 #2 INITED exec/s: 0 rss: 63Mb 00:07:41.944 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.944 This may also happen if the target rejected all inputs we tried so far 00:07:41.944 [2024-07-14 21:08:38.832893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.944 [2024-07-14 21:08:38.832929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.944 [2024-07-14 21:08:38.832970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.944 [2024-07-14 21:08:38.832986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.944 [2024-07-14 21:08:38.833017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.944 [2024-07-14 21:08:38.833033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.944 [2024-07-14 21:08:38.833064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.944 [2024-07-14 21:08:38.833079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.458 NEW_FUNC[1/691]: 0x4a0820 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:42.458 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.458 #6 NEW cov: 11810 ft: 11817 corp: 2/38b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 4 CopyPart-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:42.458 [2024-07-14 21:08:39.173652] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.173695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.173731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.173746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.173777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.173794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.458 #9 NEW cov: 11950 ft: 12953 corp: 3/62b lim: 40 exec/s: 0 rss: 70Mb L: 24/37 MS: 3 CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:07:42.458 [2024-07-14 21:08:39.233737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.233771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.233805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.233821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.233852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.233868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.233897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.233912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.458 #10 NEW cov: 11956 ft: 13186 corp: 4/99b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 ChangeBit- 00:07:42.458 [2024-07-14 21:08:39.313910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.313942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.313976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.313992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:42.458 [2024-07-14 21:08:39.314022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.458 [2024-07-14 21:08:39.314038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.458 #16 NEW cov: 12041 ft: 13385 corp: 5/125b lim: 40 exec/s: 0 rss: 70Mb L: 26/37 MS: 1 CrossOver- 00:07:42.716 [2024-07-14 21:08:39.364105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.364138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.364173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.364190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.364231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.364247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.364276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.364291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.716 #17 NEW cov: 12041 ft: 13422 corp: 6/162b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 ChangeBit- 00:07:42.716 [2024-07-14 21:08:39.414172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.414205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.414239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.414254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.414285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.414301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.716 #18 NEW cov: 12041 ft: 13480 corp: 7/188b lim: 40 exec/s: 0 rss: 70Mb L: 26/37 MS: 1 ShuffleBytes- 00:07:42.716 [2024-07-14 21:08:39.494342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.494378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.494411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.494425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.494461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.494493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.716 #19 NEW cov: 12041 ft: 13522 corp: 8/214b lim: 40 exec/s: 0 rss: 70Mb L: 26/37 MS: 1 ChangeBinInt- 00:07:42.716 [2024-07-14 21:08:39.544530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.544560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.544593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.544607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.544636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.544650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.544678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.544693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.716 #20 NEW cov: 12041 ft: 13574 corp: 9/252b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:42.716 [2024-07-14 21:08:39.594582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.594613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.594645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.594659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.716 [2024-07-14 21:08:39.594688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:6 nsid:0 cdw10:0000001a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.716 [2024-07-14 21:08:39.594702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.973 #21 NEW cov: 12041 ft: 13606 corp: 10/278b lim: 40 exec/s: 0 rss: 70Mb L: 26/38 MS: 1 ChangeBinInt- 00:07:42.973 [2024-07-14 21:08:39.674912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.973 [2024-07-14 21:08:39.674945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.973 [2024-07-14 21:08:39.674980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.973 [2024-07-14 21:08:39.675000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.973 [2024-07-14 21:08:39.675031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.973 [2024-07-14 21:08:39.675048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.973 [2024-07-14 21:08:39.675078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00001a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.973 [2024-07-14 21:08:39.675093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.974 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:42.974 #22 NEW cov: 12064 ft: 13642 corp: 11/311b lim: 40 exec/s: 0 rss: 70Mb L: 33/38 MS: 1 CrossOver- 00:07:42.974 [2024-07-14 21:08:39.755073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.755107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.974 [2024-07-14 21:08:39.755141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.755158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.974 [2024-07-14 21:08:39.755189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.755205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.974 [2024-07-14 21:08:39.755248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.755264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.974 #23 NEW cov: 12064 ft: 13654 corp: 12/348b lim: 40 exec/s: 0 rss: 70Mb L: 37/38 MS: 1 ChangeByte- 00:07:42.974 [2024-07-14 21:08:39.805200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.805233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.974 [2024-07-14 21:08:39.805265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.805280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.974 [2024-07-14 21:08:39.805309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00008000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.805325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.974 [2024-07-14 21:08:39.805353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.974 [2024-07-14 21:08:39.805368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.974 #24 NEW cov: 12064 ft: 13670 corp: 13/386b lim: 40 exec/s: 24 rss: 70Mb L: 38/38 MS: 1 CopyPart- 00:07:43.232 [2024-07-14 21:08:39.885408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.885440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:39.885495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.885512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:39.885542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00003b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.885558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:39.885588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.885603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.232 #25 NEW cov: 12064 ft: 13676 corp: 14/424b lim: 40 exec/s: 25 rss: 70Mb L: 38/38 MS: 1 InsertByte- 00:07:43.232 [2024-07-14 21:08:39.966829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a040000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.966904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:39.967029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.967070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:39.967193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:39.967232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.232 #31 NEW cov: 12064 ft: 13853 corp: 15/450b lim: 40 exec/s: 31 rss: 70Mb L: 26/38 MS: 1 ShuffleBytes- 00:07:43.232 [2024-07-14 21:08:40.016617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.016644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:40.016703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.016718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:40.016776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:008f0080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.016791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:40.016849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.016861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.232 #32 NEW cov: 12064 ft: 13893 corp: 16/489b lim: 40 exec/s: 32 rss: 70Mb L: 39/39 MS: 1 InsertByte- 00:07:43.232 [2024-07-14 21:08:40.066528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.066558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:40.066618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.066632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.232 #33 NEW cov: 12064 ft: 14181 corp: 17/511b lim: 40 exec/s: 33 rss: 70Mb L: 22/39 MS: 1 
EraseBytes- 00:07:43.232 [2024-07-14 21:08:40.116756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.116784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:40.116845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.116859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.232 [2024-07-14 21:08:40.116917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000101a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.232 [2024-07-14 21:08:40.116930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.490 #34 NEW cov: 12064 ft: 14206 corp: 18/537b lim: 40 exec/s: 34 rss: 70Mb L: 26/39 MS: 1 ChangeBit- 00:07:43.490 [2024-07-14 21:08:40.167012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.490 [2024-07-14 21:08:40.167039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.490 [2024-07-14 21:08:40.167099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.167113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.167170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.167184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.167244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:101a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.167258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.491 #35 NEW cov: 12064 ft: 14223 corp: 19/573b lim: 40 exec/s: 35 rss: 70Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:07:43.491 [2024-07-14 21:08:40.217283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.217309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.217370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.217387] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.217450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:008f0080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.217464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.217522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.217535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.217594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.217607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.491 #36 NEW cov: 12064 ft: 14289 corp: 20/613b lim: 40 exec/s: 36 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:43.491 [2024-07-14 21:08:40.267160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:28040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.267186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.267244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.267257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.267314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000101a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.267328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.491 #37 NEW cov: 12064 ft: 14374 corp: 21/639b lim: 40 exec/s: 37 rss: 70Mb L: 26/40 MS: 1 ChangeByte- 00:07:43.491 [2024-07-14 21:08:40.307325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.307350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.307409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00380000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.307423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.307484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 
21:08:40.307498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.491 #38 NEW cov: 12064 ft: 14459 corp: 22/665b lim: 40 exec/s: 38 rss: 70Mb L: 26/40 MS: 1 ChangeByte- 00:07:43.491 [2024-07-14 21:08:40.347518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.347543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.347602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.347619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.347676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00003b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.347689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.491 [2024-07-14 21:08:40.347744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.491 [2024-07-14 21:08:40.347759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.491 #39 NEW cov: 12064 ft: 14508 corp: 23/703b lim: 40 exec/s: 39 rss: 70Mb L: 38/40 MS: 1 ChangeASCIIInt- 00:07:43.750 [2024-07-14 21:08:40.397733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.397758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.397819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.397833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.397892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.397905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.397961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.397974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.750 #40 NEW cov: 12064 ft: 14519 corp: 24/740b lim: 40 exec/s: 40 rss: 70Mb L: 37/40 MS: 1 ShuffleBytes- 00:07:43.750 [2024-07-14 21:08:40.447813] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.447838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.447897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.447911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.447969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000a0400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.447983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.448040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.448053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.750 #41 NEW cov: 12064 ft: 14548 corp: 25/774b lim: 40 exec/s: 41 rss: 71Mb L: 34/40 MS: 1 CrossOver- 00:07:43.750 [2024-07-14 21:08:40.497534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.497561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.750 #42 NEW cov: 12064 ft: 14887 corp: 26/788b lim: 40 exec/s: 42 rss: 71Mb L: 14/40 MS: 1 EraseBytes- 00:07:43.750 [2024-07-14 21:08:40.548079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a040000 cdw11:00000079 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.548105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.548167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:79797979 cdw11:79790000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.548181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.548240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.548254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.548313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00001a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.548326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:43.750 #43 NEW cov: 12064 ft: 14924 corp: 27/821b lim: 40 exec/s: 43 rss: 71Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:07:43.750 [2024-07-14 21:08:40.588067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.588093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.588153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.588167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.588222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.588237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.750 #44 NEW cov: 12064 ft: 14975 corp: 28/851b lim: 40 exec/s: 44 rss: 71Mb L: 30/40 MS: 1 InsertRepeatedBytes- 00:07:43.750 [2024-07-14 21:08:40.628304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.628328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.628390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.628404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.628464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00008000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.628479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.750 [2024-07-14 21:08:40.628543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.750 [2024-07-14 21:08:40.628557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.750 #45 NEW cov: 12064 ft: 14998 corp: 29/889b lim: 40 exec/s: 45 rss: 71Mb L: 38/40 MS: 1 ShuffleBytes- 00:07:44.008 [2024-07-14 21:08:40.668410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.008 [2024-07-14 21:08:40.668435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.008 [2024-07-14 21:08:40.668501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a040000 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:44.008 [2024-07-14 21:08:40.668516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.008 [2024-07-14 21:08:40.668574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.008 [2024-07-14 21:08:40.668588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.008 [2024-07-14 21:08:40.668645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00001a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.668658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.009 #46 NEW cov: 12064 ft: 15003 corp: 30/922b lim: 40 exec/s: 46 rss: 71Mb L: 33/40 MS: 1 CrossOver- 00:07:44.009 [2024-07-14 21:08:40.718337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.718363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.718423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.718438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.009 #47 NEW cov: 12064 ft: 15035 corp: 31/945b lim: 40 exec/s: 47 rss: 71Mb L: 23/40 MS: 1 EraseBytes- 00:07:44.009 [2024-07-14 21:08:40.768685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.768710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.768774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.768787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.768847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.768861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.768919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.768936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.009 #48 NEW cov: 12064 ft: 15068 corp: 32/978b lim: 40 exec/s: 48 rss: 71Mb L: 33/40 MS: 1 ShuffleBytes- 00:07:44.009 
[2024-07-14 21:08:40.818833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.818858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.818916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.818930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.818990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000a0400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.819005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.009 [2024-07-14 21:08:40.819060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.009 [2024-07-14 21:08:40.819073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.009 #49 NEW cov: 12064 ft: 15083 corp: 33/1012b lim: 40 exec/s: 24 rss: 71Mb L: 34/40 MS: 1 ChangeBit- 00:07:44.009 #49 DONE cov: 12064 ft: 15083 corp: 33/1012b lim: 40 exec/s: 24 rss: 71Mb 00:07:44.009 Done 49 runs in 2 second(s) 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 
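The trace around this point shows the harness closing out fuzz round 10 (removing /tmp/fuzz_json_10.conf and the old suppression file) and preparing round 11: it selects port 4411, rewrites the per-round JSON config so the NVMe/TCP listener uses that trsvcid, writes LSAN leak suppressions, and then launches the fuzzer binary. As a rough sketch of an equivalent manual invocation — assuming an SPDK checkout at $SPDK_DIR and the same config layout; the literal paths in this log are specific to the CI workspace, and the full recorded command follows in the trace below:

    # leak suppressions, as echoed by run.sh in the trace below
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
    # -t and -Z appear to correspond to the timen and fuzzer_type variables set in the trace above
    $SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' \
        -c /tmp/fuzz_json_11.conf -t 1 -D $SPDK_DIR/../corpus/llvm_nvmf_11 -Z 11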
00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.266 21:08:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:44.266 [2024-07-14 21:08:41.000181] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:44.266 [2024-07-14 21:08:41.000251] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022001 ] 00:07:44.266 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.524 [2024-07-14 21:08:41.180653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.524 [2024-07-14 21:08:41.202374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.524 [2024-07-14 21:08:41.254567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.524 [2024-07-14 21:08:41.270872] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:44.524 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.524 INFO: Seed: 2910373264 00:07:44.524 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:44.524 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:44.524 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:44.524 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.524 #2 INITED exec/s: 0 rss: 62Mb 00:07:44.524 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:44.524 This may also happen if the target rejected all inputs we tried so far 00:07:44.524 [2024-07-14 21:08:41.318453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.525 [2024-07-14 21:08:41.318489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.525 [2024-07-14 21:08:41.318524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.525 [2024-07-14 21:08:41.318539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.783 NEW_FUNC[1/692]: 0x4a2590 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:44.783 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:44.783 #3 NEW cov: 11832 ft: 11833 corp: 2/21b lim: 40 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:45.042 [2024-07-14 21:08:41.691022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.691067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.042 [2024-07-14 21:08:41.691214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.691235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.042 #4 NEW cov: 11962 ft: 12588 corp: 3/41b lim: 40 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeBit- 00:07:45.042 [2024-07-14 21:08:41.751063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.751093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.042 [2024-07-14 21:08:41.751220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.751237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.042 [2024-07-14 21:08:41.751352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.751371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.042 #5 NEW cov: 11968 ft: 13055 corp: 4/70b lim: 40 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 CrossOver- 00:07:45.042 [2024-07-14 21:08:41.791095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:45.042 [2024-07-14 21:08:41.791119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.042 [2024-07-14 21:08:41.791245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2dffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.791259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.042 #6 NEW cov: 12053 ft: 13256 corp: 5/91b lim: 40 exec/s: 0 rss: 69Mb L: 21/29 MS: 1 InsertByte- 00:07:45.042 [2024-07-14 21:08:41.841279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.841305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.042 [2024-07-14 21:08:41.841433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.042 [2024-07-14 21:08:41.841453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.042 #7 NEW cov: 12053 ft: 13355 corp: 6/111b lim: 40 exec/s: 0 rss: 69Mb L: 20/29 MS: 1 ShuffleBytes- 00:07:45.043 [2024-07-14 21:08:41.881881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.881906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.043 [2024-07-14 21:08:41.882042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.882057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.043 [2024-07-14 21:08:41.882189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.882203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.043 [2024-07-14 21:08:41.882323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.882340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.043 #8 NEW cov: 12053 ft: 13796 corp: 7/149b lim: 40 exec/s: 0 rss: 69Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:45.043 [2024-07-14 21:08:41.932086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.932112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.043 [2024-07-14 21:08:41.932250] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.932266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.043 [2024-07-14 21:08:41.932408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.932422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.043 [2024-07-14 21:08:41.932538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.043 [2024-07-14 21:08:41.932553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.302 #9 NEW cov: 12053 ft: 13878 corp: 8/183b lim: 40 exec/s: 0 rss: 69Mb L: 34/38 MS: 1 CrossOver- 00:07:45.302 [2024-07-14 21:08:41.972223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:41.972249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:41.972385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:41.972401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:41.972527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:41.972555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:41.972687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:41.972702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.302 #10 NEW cov: 12053 ft: 13923 corp: 9/221b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ShuffleBytes- 00:07:45.302 [2024-07-14 21:08:42.021789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2dffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.021818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.021951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.021968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.302 #11 NEW cov: 12053 ft: 14001 corp: 
10/241b lim: 40 exec/s: 0 rss: 70Mb L: 20/38 MS: 1 ChangeByte- 00:07:45.302 [2024-07-14 21:08:42.062477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.062502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.062641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.062655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.062785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.062802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.062925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.062941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.302 #12 NEW cov: 12053 ft: 14036 corp: 11/280b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 InsertByte- 00:07:45.302 [2024-07-14 21:08:42.102673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.102700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.102853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:09ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.102869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.103001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.103018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.103154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.103171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.302 #13 NEW cov: 12053 ft: 14063 corp: 12/318b lim: 40 exec/s: 0 rss: 70Mb L: 38/39 MS: 1 ChangeBinInt- 00:07:45.302 [2024-07-14 21:08:42.152075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.152102] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.152231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.152246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.302 #14 NEW cov: 12053 ft: 14131 corp: 13/339b lim: 40 exec/s: 0 rss: 70Mb L: 21/39 MS: 1 CrossOver- 00:07:45.302 [2024-07-14 21:08:42.202407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.202433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.302 [2024-07-14 21:08:42.202577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffbfff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.302 [2024-07-14 21:08:42.202593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.562 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:45.562 #15 NEW cov: 12076 ft: 14175 corp: 14/359b lim: 40 exec/s: 0 rss: 70Mb L: 20/39 MS: 1 ChangeBit- 00:07:45.562 [2024-07-14 21:08:42.243248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.243274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.243419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.243439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.243564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.243582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.243713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.243728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.243855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.243871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.562 #21 NEW cov: 12076 ft: 14263 corp: 15/399b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertByte- 00:07:45.562 [2024-07-14 21:08:42.293397] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.293423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.293572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.293586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.293717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.293735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.293859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.293874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.293997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.294013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.562 #22 NEW cov: 12076 ft: 14305 corp: 16/439b lim: 40 exec/s: 22 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:45.562 [2024-07-14 21:08:42.343018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.343044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.343172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff01 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.343188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.562 [2024-07-14 21:08:42.343315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.343334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.562 #23 NEW cov: 12076 ft: 14334 corp: 17/463b lim: 40 exec/s: 23 rss: 70Mb L: 24/40 MS: 1 CMP- DE: "\001\000\377\377"- 00:07:45.562 [2024-07-14 21:08:42.402975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.403001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.562 
[2024-07-14 21:08:42.403140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffbfff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.562 [2024-07-14 21:08:42.403156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.562 #24 NEW cov: 12076 ft: 14449 corp: 18/483b lim: 40 exec/s: 24 rss: 70Mb L: 20/40 MS: 1 ShuffleBytes- 00:07:45.562 [2024-07-14 21:08:42.463071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.563 [2024-07-14 21:08:42.463098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.822 #25 NEW cov: 12076 ft: 15165 corp: 19/496b lim: 40 exec/s: 25 rss: 70Mb L: 13/40 MS: 1 CrossOver- 00:07:45.822 [2024-07-14 21:08:42.503069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.503097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.503221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffdb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.503247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.822 #26 NEW cov: 12076 ft: 15175 corp: 20/518b lim: 40 exec/s: 26 rss: 70Mb L: 22/40 MS: 1 InsertByte- 00:07:45.822 [2024-07-14 21:08:42.564042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.564070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.564205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:09ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.564221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.564347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:fffffeff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.564363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.564576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.564594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.822 #27 NEW cov: 12076 ft: 15214 corp: 21/556b lim: 40 exec/s: 27 rss: 70Mb L: 38/40 MS: 1 ChangeBit- 00:07:45.822 [2024-07-14 21:08:42.613418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 
cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.613446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.613589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffdbff cdw11:bfffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.613604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.822 #28 NEW cov: 12076 ft: 15241 corp: 22/573b lim: 40 exec/s: 28 rss: 70Mb L: 17/40 MS: 1 EraseBytes- 00:07:45.822 [2024-07-14 21:08:42.674592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.674618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.674743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.674758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.674885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.674901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.675033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.675049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.822 [2024-07-14 21:08:42.675180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.822 [2024-07-14 21:08:42.675196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.822 #29 NEW cov: 12076 ft: 15311 corp: 23/613b lim: 40 exec/s: 29 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:46.082 [2024-07-14 21:08:42.724422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.724454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.724580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.724596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.724726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 
nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.724743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.724868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.724883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.082 #30 NEW cov: 12076 ft: 15331 corp: 24/651b lim: 40 exec/s: 30 rss: 70Mb L: 38/40 MS: 1 ShuffleBytes- 00:07:46.082 [2024-07-14 21:08:42.763948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2dffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.763974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.764109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff5bff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.764126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.082 #31 NEW cov: 12076 ft: 15351 corp: 25/671b lim: 40 exec/s: 31 rss: 70Mb L: 20/40 MS: 1 ChangeByte- 00:07:46.082 [2024-07-14 21:08:42.814172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.814197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.814329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2cffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.814346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.082 #32 NEW cov: 12076 ft: 15358 corp: 26/691b lim: 40 exec/s: 32 rss: 70Mb L: 20/40 MS: 1 ChangeByte- 00:07:46.082 [2024-07-14 21:08:42.854245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.854271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.854400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffdb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.854416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.082 #33 NEW cov: 12076 ft: 15367 corp: 27/713b lim: 40 exec/s: 33 rss: 70Mb L: 22/40 MS: 1 ChangeBinInt- 00:07:46.082 [2024-07-14 21:08:42.895162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.895189] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.895311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.895327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.895448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2affff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.895465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.895586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00ffff00 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.895602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.895732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.895747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.082 #34 NEW cov: 12076 ft: 15383 corp: 28/753b lim: 40 exec/s: 34 rss: 70Mb L: 40/40 MS: 1 PersAutoDict- DE: "\001\000\377\377"- 00:07:46.082 [2024-07-14 21:08:42.944772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.944801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.944925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.944941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.082 [2024-07-14 21:08:42.945066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.082 [2024-07-14 21:08:42.945082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.082 #35 NEW cov: 12076 ft: 15391 corp: 29/777b lim: 40 exec/s: 35 rss: 70Mb L: 24/40 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:46.341 [2024-07-14 21:08:42.995510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:42.995537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.341 [2024-07-14 21:08:42.995659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:46.341 [2024-07-14 21:08:42.995674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.341 [2024-07-14 21:08:42.995796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:42.995812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.341 [2024-07-14 21:08:42.995935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:42.995951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.341 [2024-07-14 21:08:42.996078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:42.996093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.341 #36 NEW cov: 12076 ft: 15400 corp: 30/817b lim: 40 exec/s: 36 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:46.341 [2024-07-14 21:08:43.034131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:43.034158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.341 #37 NEW cov: 12076 ft: 15427 corp: 31/826b lim: 40 exec/s: 37 rss: 71Mb L: 9/40 MS: 1 EraseBytes- 00:07:46.341 [2024-07-14 21:08:43.084935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:43.084962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.341 [2024-07-14 21:08:43.085092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffdbff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:43.085107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.341 #38 NEW cov: 12076 ft: 15445 corp: 32/847b lim: 40 exec/s: 38 rss: 71Mb L: 21/40 MS: 1 CrossOver- 00:07:46.341 [2024-07-14 21:08:43.125023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.341 [2024-07-14 21:08:43.125053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.342 [2024-07-14 21:08:43.125184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffdb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.342 [2024-07-14 21:08:43.125200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.342 #39 NEW cov: 12076 ft: 15449 corp: 33/869b lim: 40 exec/s: 39 rss: 
71Mb L: 22/40 MS: 1 ChangeBinInt- 00:07:46.342 [2024-07-14 21:08:43.165135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2dc1ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.342 [2024-07-14 21:08:43.165162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.342 [2024-07-14 21:08:43.165289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.342 [2024-07-14 21:08:43.165307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.342 #40 NEW cov: 12076 ft: 15480 corp: 34/890b lim: 40 exec/s: 40 rss: 71Mb L: 21/40 MS: 1 InsertByte- 00:07:46.342 [2024-07-14 21:08:43.205281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.342 [2024-07-14 21:08:43.205308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.342 [2024-07-14 21:08:43.205439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff41ffff cdw11:ffffffdb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.342 [2024-07-14 21:08:43.205460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.342 #41 NEW cov: 12076 ft: 15490 corp: 35/912b lim: 40 exec/s: 41 rss: 71Mb L: 22/40 MS: 1 ChangeByte- 00:07:46.601 [2024-07-14 21:08:43.245560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.601 [2024-07-14 21:08:43.245587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.601 [2024-07-14 21:08:43.245716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0100ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.601 [2024-07-14 21:08:43.245730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.601 [2024-07-14 21:08:43.245851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.601 [2024-07-14 21:08:43.245867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.601 #42 NEW cov: 12076 ft: 15505 corp: 36/936b lim: 40 exec/s: 42 rss: 71Mb L: 24/40 MS: 1 CopyPart- 00:07:46.601 [2024-07-14 21:08:43.285590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.601 [2024-07-14 21:08:43.285617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.601 [2024-07-14 21:08:43.285743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:46.601 [2024-07-14 21:08:43.285760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.601 [2024-07-14 21:08:43.285895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:2affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.601 [2024-07-14 21:08:43.285912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.601 #43 NEW cov: 12076 ft: 15528 corp: 37/961b lim: 40 exec/s: 21 rss: 71Mb L: 25/40 MS: 1 EraseBytes- 00:07:46.601 #43 DONE cov: 12076 ft: 15528 corp: 37/961b lim: 40 exec/s: 21 rss: 71Mb 00:07:46.601 ###### Recommended dictionary. ###### 00:07:46.601 "\001\000\377\377" # Uses: 1 00:07:46.601 "\000\000\000\000" # Uses: 0 00:07:46.601 ###### End of recommended dictionary. ###### 00:07:46.601 Done 43 runs in 2 second(s) 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:46.601 21:08:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:46.601 [2024-07-14 21:08:43.453990] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:46.601 [2024-07-14 21:08:43.454077] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022529 ] 00:07:46.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.861 [2024-07-14 21:08:43.634803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.861 [2024-07-14 21:08:43.657064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.861 [2024-07-14 21:08:43.709255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.861 [2024-07-14 21:08:43.725572] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:46.861 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.861 INFO: Seed: 1070890749 00:07:46.861 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:46.861 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:46.861 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:46.861 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.861 #2 INITED exec/s: 0 rss: 63Mb 00:07:46.861 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:46.861 This may also happen if the target rejected all inputs we tried so far 00:07:47.119 [2024-07-14 21:08:43.770782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a82 cdw11:55f0addf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.119 [2024-07-14 21:08:43.770810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.377 NEW_FUNC[1/692]: 0x4a4300 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:47.377 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.377 #5 NEW cov: 11830 ft: 11831 corp: 2/10b lim: 40 exec/s: 0 rss: 69Mb L: 9/9 MS: 3 ChangeBit-ChangeBit-CMP- DE: "\001*\202U\360\255\337\032"- 00:07:47.377 [2024-07-14 21:08:44.101674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a82 cdw11:55f05320 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.377 [2024-07-14 21:08:44.101707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.377 #11 NEW cov: 11960 ft: 12359 corp: 3/19b lim: 40 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:47.377 [2024-07-14 21:08:44.151749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.377 [2024-07-14 21:08:44.151775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.377 #13 NEW cov: 11966 ft: 12729 corp: 4/32b lim: 40 exec/s: 0 rss: 69Mb L: 13/13 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:47.377 [2024-07-14 21:08:44.191818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:47.377 [2024-07-14 21:08:44.191844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.377 #14 NEW cov: 12051 ft: 12996 corp: 5/46b lim: 40 exec/s: 0 rss: 69Mb L: 14/14 MS: 1 InsertByte- 00:07:47.377 [2024-07-14 21:08:44.241980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f0addf cdw11:1a28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.377 [2024-07-14 21:08:44.242006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.377 #16 NEW cov: 12051 ft: 13071 corp: 6/57b lim: 40 exec/s: 0 rss: 69Mb L: 11/14 MS: 2 EraseBytes-CrossOver- 00:07:47.635 [2024-07-14 21:08:44.282609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.282636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.282699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.282714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.282771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.282786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.282841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.282857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.635 #17 NEW cov: 12051 ft: 13965 corp: 7/95b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:47.635 [2024-07-14 21:08:44.332215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.332242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.635 #18 NEW cov: 12051 ft: 14043 corp: 8/110b lim: 40 exec/s: 0 rss: 70Mb L: 15/38 MS: 1 InsertByte- 00:07:47.635 [2024-07-14 21:08:44.382895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.382922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.382981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.382995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.383057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.383071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.383130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.383144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.635 #19 NEW cov: 12051 ft: 14086 corp: 9/149b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 InsertByte- 00:07:47.635 [2024-07-14 21:08:44.432554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a55 cdw11:208253f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.432581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.635 #20 NEW cov: 12051 ft: 14113 corp: 10/158b lim: 40 exec/s: 0 rss: 70Mb L: 9/39 MS: 1 ShuffleBytes- 00:07:47.635 [2024-07-14 21:08:44.472645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f0addf cdw11:1a28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.472670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.635 #21 NEW cov: 12051 ft: 14140 corp: 11/170b lim: 40 exec/s: 0 rss: 70Mb L: 12/39 MS: 1 InsertByte- 00:07:47.635 [2024-07-14 21:08:44.523277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.523304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.523363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.523377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.523434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.523452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.635 [2024-07-14 21:08:44.523515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.635 [2024-07-14 21:08:44.523528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.995 #22 NEW cov: 12051 ft: 14181 corp: 12/208b lim: 40 exec/s: 0 rss: 70Mb L: 38/39 MS: 1 ChangeBinInt- 00:07:47.995 [2024-07-14 21:08:44.562901] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.562927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.995 #23 NEW cov: 12051 ft: 14211 corp: 13/223b lim: 40 exec/s: 0 rss: 70Mb L: 15/39 MS: 1 CrossOver- 00:07:47.995 [2024-07-14 21:08:44.613043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012282 cdw11:55f0addf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.613069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.995 #24 NEW cov: 12051 ft: 14242 corp: 14/232b lim: 40 exec/s: 0 rss: 70Mb L: 9/39 MS: 1 ChangeBit- 00:07:47.995 [2024-07-14 21:08:44.653186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a82 cdw11:550af053 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.653214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.995 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:47.995 #25 NEW cov: 12074 ft: 14346 corp: 15/242b lim: 40 exec/s: 0 rss: 70Mb L: 10/39 MS: 1 CrossOver- 00:07:47.995 [2024-07-14 21:08:44.693272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f1addf cdw11:1a28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.693300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.995 #26 NEW cov: 12074 ft: 14416 corp: 16/254b lim: 40 exec/s: 0 rss: 70Mb L: 12/39 MS: 1 ChangeBit- 00:07:47.995 [2024-07-14 21:08:44.743413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a55 cdw11:20825301 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.743440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.995 #27 NEW cov: 12074 ft: 14425 corp: 17/263b lim: 40 exec/s: 27 rss: 70Mb L: 9/39 MS: 1 CopyPart- 00:07:47.995 [2024-07-14 21:08:44.793538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012282 cdw11:55f0addf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.793565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.995 #28 NEW cov: 12074 ft: 14438 corp: 18/273b lim: 40 exec/s: 28 rss: 70Mb L: 10/39 MS: 1 InsertByte- 00:07:47.995 [2024-07-14 21:08:44.843702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f1ad5f cdw11:1a28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.995 [2024-07-14 21:08:44.843728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 #29 NEW cov: 12074 ft: 14477 corp: 19/285b lim: 40 exec/s: 29 rss: 70Mb L: 12/39 MS: 1 ChangeBit- 00:07:48.254 [2024-07-14 21:08:44.894303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.894330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:44.894394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:002b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.894408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:44.894467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.894481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:44.894541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.894554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.254 #30 NEW cov: 12074 ft: 14486 corp: 20/324b lim: 40 exec/s: 30 rss: 70Mb L: 39/39 MS: 1 ChangeByte- 00:07:48.254 [2024-07-14 21:08:44.944482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.944507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:44.944569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.944583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:44.944641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.944655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:44.944713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.944727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.254 #31 NEW cov: 12074 ft: 14531 corp: 21/363b lim: 40 exec/s: 31 rss: 70Mb L: 39/39 MS: 1 ShuffleBytes- 00:07:48.254 [2024-07-14 21:08:44.984099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00060000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:44.984125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 #32 NEW cov: 12074 ft: 14558 corp: 22/378b lim: 40 exec/s: 32 rss: 
70Mb L: 15/39 MS: 1 ChangeBinInt- 00:07:48.254 [2024-07-14 21:08:45.034233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f055f0 cdw11:ad28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:45.034259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 #33 NEW cov: 12074 ft: 14581 corp: 23/389b lim: 40 exec/s: 33 rss: 70Mb L: 11/39 MS: 1 CopyPart- 00:07:48.254 [2024-07-14 21:08:45.074355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:94949494 cdw11:94949494 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:45.074380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 #37 NEW cov: 12074 ft: 14668 corp: 24/399b lim: 40 exec/s: 37 rss: 70Mb L: 10/39 MS: 4 InsertByte-CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:48.254 [2024-07-14 21:08:45.114650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:94949494 cdw11:012a8255 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:45.114678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.254 [2024-07-14 21:08:45.114740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:f0addf1a cdw11:94949494 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.254 [2024-07-14 21:08:45.114755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.254 #38 NEW cov: 12074 ft: 14923 corp: 25/417b lim: 40 exec/s: 38 rss: 70Mb L: 18/39 MS: 1 PersAutoDict- DE: "\001*\202U\360\255\337\032"- 00:07:48.513 [2024-07-14 21:08:45.164619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a82 cdw11:55f0addb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.164644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.513 #39 NEW cov: 12074 ft: 14936 corp: 26/426b lim: 40 exec/s: 39 rss: 70Mb L: 9/39 MS: 1 ChangeBit- 00:07:48.513 [2024-07-14 21:08:45.204706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f1addf cdw11:1a28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.204730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.513 #40 NEW cov: 12074 ft: 15012 corp: 27/438b lim: 40 exec/s: 40 rss: 70Mb L: 12/39 MS: 1 ShuffleBytes- 00:07:48.513 [2024-07-14 21:08:45.244837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:31f0addf cdw11:1a28012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.244862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.513 #41 NEW cov: 12074 ft: 15076 corp: 28/450b lim: 40 exec/s: 41 rss: 70Mb L: 12/39 MS: 1 ChangeByte- 00:07:48.513 [2024-07-14 21:08:45.284964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30012a82 
cdw11:55f05320 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.284989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.513 #42 NEW cov: 12074 ft: 15108 corp: 29/459b lim: 40 exec/s: 42 rss: 70Mb L: 9/39 MS: 1 ChangeByte- 00:07:48.513 [2024-07-14 21:08:45.325588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.325612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.513 [2024-07-14 21:08:45.325671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.325685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.513 [2024-07-14 21:08:45.325741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.325754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.513 [2024-07-14 21:08:45.325809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0000002a cdw11:8255f053 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.325822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.513 #43 NEW cov: 12074 ft: 15115 corp: 30/493b lim: 40 exec/s: 43 rss: 70Mb L: 34/39 MS: 1 EraseBytes- 00:07:48.513 [2024-07-14 21:08:45.365164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a55 cdw11:208253f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.365190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.513 #44 NEW cov: 12074 ft: 15119 corp: 31/502b lim: 40 exec/s: 44 rss: 70Mb L: 9/39 MS: 1 CopyPart- 00:07:48.513 [2024-07-14 21:08:45.405292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.513 [2024-07-14 21:08:45.405317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.772 #45 NEW cov: 12074 ft: 15134 corp: 32/512b lim: 40 exec/s: 45 rss: 70Mb L: 10/39 MS: 1 EraseBytes- 00:07:48.772 [2024-07-14 21:08:45.445408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f17c00 cdw11:0000012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.772 [2024-07-14 21:08:45.445433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.773 [2024-07-14 21:08:45.495600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55f17c00 cdw11:0000012a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.495625] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.773 #47 NEW cov: 12074 ft: 15176 corp: 33/524b lim: 40 exec/s: 47 rss: 70Mb L: 12/39 MS: 2 CMP-ShuffleBytes- DE: "|\000\000\000"- 00:07:48.773 [2024-07-14 21:08:45.535712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012082 cdw11:53012a55 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.535737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.773 #48 NEW cov: 12074 ft: 15223 corp: 34/537b lim: 40 exec/s: 48 rss: 71Mb L: 13/39 MS: 1 CopyPart- 00:07:48.773 [2024-07-14 21:08:45.585863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.585888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.773 #49 NEW cov: 12074 ft: 15244 corp: 35/552b lim: 40 exec/s: 49 rss: 71Mb L: 15/39 MS: 1 PersAutoDict- DE: "\001*\202U\360\255\337\032"- 00:07:48.773 [2024-07-14 21:08:45.626458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.626485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.773 [2024-07-14 21:08:45.626543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00012a82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.626557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.773 [2024-07-14 21:08:45.626617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:55f0addf cdw11:1a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.626631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.773 [2024-07-14 21:08:45.626686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.626699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.773 #50 NEW cov: 12074 ft: 15319 corp: 36/590b lim: 40 exec/s: 50 rss: 71Mb L: 38/39 MS: 1 PersAutoDict- DE: "\001*\202U\360\255\337\032"- 00:07:48.773 [2024-07-14 21:08:45.666051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012282 cdw11:55f0adcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.773 [2024-07-14 21:08:45.666076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.032 #51 NEW cov: 12074 ft: 15432 corp: 37/599b lim: 40 exec/s: 51 rss: 71Mb L: 9/39 MS: 1 ChangeBit- 00:07:49.032 [2024-07-14 21:08:45.706167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a55 cdw11:20825332 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:49.032 [2024-07-14 21:08:45.706192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.032 #52 NEW cov: 12074 ft: 15437 corp: 38/608b lim: 40 exec/s: 52 rss: 71Mb L: 9/39 MS: 1 ChangeByte- 00:07:49.032 [2024-07-14 21:08:45.746276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:28012a82 cdw11:557c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.032 [2024-07-14 21:08:45.746302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.032 #53 NEW cov: 12074 ft: 15510 corp: 39/621b lim: 40 exec/s: 26 rss: 71Mb L: 13/39 MS: 1 PersAutoDict- DE: "|\000\000\000"- 00:07:49.032 #53 DONE cov: 12074 ft: 15510 corp: 39/621b lim: 40 exec/s: 26 rss: 71Mb 00:07:49.032 ###### Recommended dictionary. ###### 00:07:49.032 "\001*\202U\360\255\337\032" # Uses: 3 00:07:49.032 "|\000\000\000" # Uses: 1 00:07:49.032 ###### End of recommended dictionary. ###### 00:07:49.032 Done 53 runs in 2 second(s) 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.032 21:08:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:49.032 [2024-07-14 21:08:45.916335] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:49.032 [2024-07-14 21:08:45.916390] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022865 ] 00:07:49.291 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.291 [2024-07-14 21:08:46.093515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.291 [2024-07-14 21:08:46.116569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.291 [2024-07-14 21:08:46.169238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.291 [2024-07-14 21:08:46.185560] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:49.550 INFO: Running with entropic power schedule (0xFF, 100). 00:07:49.550 INFO: Seed: 3529909852 00:07:49.550 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:49.550 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:49.550 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:49.550 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.550 #2 INITED exec/s: 0 rss: 62Mb 00:07:49.550 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:49.550 This may also happen if the target rejected all inputs we tried so far 00:07:49.550 [2024-07-14 21:08:46.254591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.550 [2024-07-14 21:08:46.254625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.809 NEW_FUNC[1/690]: 0x4a5ec0 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:49.809 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.809 #4 NEW cov: 11815 ft: 11818 corp: 2/13b lim: 40 exec/s: 0 rss: 69Mb L: 12/12 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:49.809 [2024-07-14 21:08:46.585782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.585821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.809 [2024-07-14 21:08:46.585945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.585962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.809 [2024-07-14 21:08:46.586086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.586103] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.809 NEW_FUNC[1/1]: 0x1d98bb0 in spdk_thread_is_exited /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:732 00:07:49.809 #10 NEW cov: 11948 ft: 12787 corp: 3/42b lim: 40 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:49.809 [2024-07-14 21:08:46.645906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.645936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.809 [2024-07-14 21:08:46.646059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.646077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.809 [2024-07-14 21:08:46.646199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.646219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.809 #11 NEW cov: 11954 ft: 12982 corp: 4/71b lim: 40 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 ChangeBinInt- 00:07:49.809 [2024-07-14 21:08:46.696447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.809 [2024-07-14 21:08:46.696472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.809 [2024-07-14 21:08:46.696594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.810 [2024-07-14 21:08:46.696613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.810 [2024-07-14 21:08:46.696741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.810 [2024-07-14 21:08:46.696756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.810 [2024-07-14 21:08:46.696884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.810 [2024-07-14 21:08:46.696901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.810 [2024-07-14 21:08:46.697032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.810 [2024-07-14 21:08:46.697047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.069 #12 NEW cov: 12039 ft: 13686 corp: 
5/111b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:50.069 [2024-07-14 21:08:46.746587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.746612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.746735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.746752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.746878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.746894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.747018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.747034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.747171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.747186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.069 #13 NEW cov: 12039 ft: 13737 corp: 6/151b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:50.069 [2024-07-14 21:08:46.796277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.796305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.796449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.796465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.796588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.796606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.069 #14 NEW cov: 12039 ft: 13850 corp: 7/180b lim: 40 exec/s: 0 rss: 70Mb L: 29/40 MS: 1 ShuffleBytes- 00:07:50.069 [2024-07-14 21:08:46.836819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.836844] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.836989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.837006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.837139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.837155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.837282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e01f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.837300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.837426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.837440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.069 #15 NEW cov: 12039 ft: 13979 corp: 8/220b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:50.069 [2024-07-14 21:08:46.886753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.886779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.886903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffff1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.886919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.887040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1a cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.887054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.887185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.887205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.069 #16 NEW cov: 12039 ft: 14051 corp: 9/253b lim: 40 exec/s: 0 rss: 70Mb L: 33/40 MS: 1 CMP- DE: "\015\000\000\000"- 00:07:50.069 [2024-07-14 21:08:46.926520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.926544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.926679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.926695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.069 #17 NEW cov: 12039 ft: 14319 corp: 10/273b lim: 40 exec/s: 0 rss: 70Mb L: 20/40 MS: 1 EraseBytes- 00:07:50.069 [2024-07-14 21:08:46.967120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.967146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.967275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.967293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.069 [2024-07-14 21:08:46.967414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1a1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.069 [2024-07-14 21:08:46.967433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.070 [2024-07-14 21:08:46.967556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.070 [2024-07-14 21:08:46.967570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.328 #18 NEW cov: 12039 ft: 14390 corp: 11/306b lim: 40 exec/s: 0 rss: 70Mb L: 33/40 MS: 1 CopyPart- 00:07:50.328 [2024-07-14 21:08:47.016808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.328 [2024-07-14 21:08:47.016834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.328 [2024-07-14 21:08:47.016985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0100e0d7 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.328 [2024-07-14 21:08:47.017002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.328 #19 NEW cov: 12039 ft: 14422 corp: 12/326b lim: 40 exec/s: 0 rss: 70Mb L: 20/40 MS: 1 ChangeBinInt- 00:07:50.328 [2024-07-14 21:08:47.066767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.328 [2024-07-14 21:08:47.066793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:50.328 #20 NEW cov: 12039 ft: 14434 corp: 13/336b lim: 40 exec/s: 0 rss: 70Mb L: 10/40 MS: 1 EraseBytes- 00:07:50.328 [2024-07-14 21:08:47.107728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.107757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.107895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.107911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.108047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.108075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.108200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e01f1a1f cdw11:1b1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.108216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.108339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.108356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.329 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:50.329 #21 NEW cov: 12062 ft: 14474 corp: 14/376b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:50.329 [2024-07-14 21:08:47.157412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affdfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.157436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.157563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.157580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.157703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.157718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.329 #22 NEW cov: 12062 ft: 14531 corp: 15/405b lim: 40 exec/s: 0 rss: 70Mb L: 29/40 MS: 1 ChangeBit- 00:07:50.329 [2024-07-14 21:08:47.207942] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.207968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.208092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.208108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.208238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.208253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.208381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.208400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.329 [2024-07-14 21:08:47.208531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1fef0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.329 [2024-07-14 21:08:47.208548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.329 #23 NEW cov: 12062 ft: 14539 corp: 16/445b lim: 40 exec/s: 23 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:50.588 [2024-07-14 21:08:47.247984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affdfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.248009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.248149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.248167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.248294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.248310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.248437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f05 cdw11:1660f557 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.248456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.588 #24 NEW cov: 12062 ft: 14551 corp: 17/482b lim: 40 exec/s: 24 rss: 70Mb L: 37/40 MS: 1 CMP- 
DE: "\005\026`\365W\202*\000"- 00:07:50.588 [2024-07-14 21:08:47.297962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.297987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.298135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffff1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.298151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.298275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1a cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.298291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.298412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.298427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.588 #25 NEW cov: 12062 ft: 14595 corp: 18/518b lim: 40 exec/s: 25 rss: 70Mb L: 36/40 MS: 1 CrossOver- 00:07:50.588 [2024-07-14 21:08:47.337952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affdfbf cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.337981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.338108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.338126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.588 [2024-07-14 21:08:47.338260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.338277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.588 #26 NEW cov: 12062 ft: 14608 corp: 19/547b lim: 40 exec/s: 26 rss: 70Mb L: 29/40 MS: 1 ChangeBit- 00:07:50.588 [2024-07-14 21:08:47.378058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.588 [2024-07-14 21:08:47.378085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.378222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.378240] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.378367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f211f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.378383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.589 #27 NEW cov: 12062 ft: 14630 corp: 20/576b lim: 40 exec/s: 27 rss: 70Mb L: 29/40 MS: 1 ChangeByte- 00:07:50.589 [2024-07-14 21:08:47.418409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affdfbf cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.418437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.418555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.418571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.418695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.418709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.418835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:1f1f1fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.418852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.589 #28 NEW cov: 12062 ft: 14655 corp: 21/609b lim: 40 exec/s: 28 rss: 70Mb L: 33/40 MS: 1 CMP- DE: "\377\377\377\000"- 00:07:50.589 [2024-07-14 21:08:47.468776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affdfbf cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.468803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.468939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.468959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.469084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.469100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.469228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000001f cdw11:1f1f1fff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.469243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.589 [2024-07-14 21:08:47.469371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffff001f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.589 [2024-07-14 21:08:47.469387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.849 #34 NEW cov: 12062 ft: 14708 corp: 22/649b lim: 40 exec/s: 34 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:50.849 [2024-07-14 21:08:47.518666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.518692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.518813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffff1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.518828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.518961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1a cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.518978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.519097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f241f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.519113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.849 #35 NEW cov: 12062 ft: 14744 corp: 23/686b lim: 40 exec/s: 35 rss: 71Mb L: 37/40 MS: 1 InsertByte- 00:07:50.849 [2024-07-14 21:08:47.568170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00021b4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.568196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.849 #37 NEW cov: 12062 ft: 14785 corp: 24/696b lim: 40 exec/s: 37 rss: 71Mb L: 10/40 MS: 2 CopyPart-CMP- DE: "\001\000\000\000\002\033J\323"- 00:07:50.849 [2024-07-14 21:08:47.608255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:ff1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.608280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.849 #38 NEW cov: 12062 ft: 14797 corp: 25/706b lim: 40 exec/s: 38 rss: 71Mb L: 10/40 MS: 1 ShuffleBytes- 00:07:50.849 [2024-07-14 21:08:47.659314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 
[2024-07-14 21:08:47.659340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.659487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.659507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.659639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.659656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.659798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.659814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.659947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1b1fef0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.659964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.849 #39 NEW cov: 12062 ft: 14805 corp: 26/746b lim: 40 exec/s: 39 rss: 71Mb L: 40/40 MS: 1 ChangeBit- 00:07:50.849 [2024-07-14 21:08:47.708619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aff0516 cdw11:60f557ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.708644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.849 #40 NEW cov: 12062 ft: 14823 corp: 27/758b lim: 40 exec/s: 40 rss: 71Mb L: 12/40 MS: 1 CrossOver- 00:07:50.849 [2024-07-14 21:08:47.748850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.748876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.849 [2024-07-14 21:08:47.749012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.849 [2024-07-14 21:08:47.749029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 #41 NEW cov: 12062 ft: 14838 corp: 28/778b lim: 40 exec/s: 41 rss: 71Mb L: 20/40 MS: 1 ChangeBit- 00:07:51.109 [2024-07-14 21:08:47.789346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affdfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.789372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.789502] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff1f1f cdw11:1f1f1f1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.789519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.789650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1d1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.789667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.789793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f05 cdw11:1660f557 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.789807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.109 #42 NEW cov: 12062 ft: 14863 corp: 29/815b lim: 40 exec/s: 42 rss: 71Mb L: 37/40 MS: 1 ChangeBit- 00:07:51.109 [2024-07-14 21:08:47.839603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.839629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.839754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000080ff cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.839769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.839900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1a1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.839916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.840040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.840056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.109 #43 NEW cov: 12062 ft: 14885 corp: 30/848b lim: 40 exec/s: 43 rss: 71Mb L: 33/40 MS: 1 ChangeBit- 00:07:51.109 [2024-07-14 21:08:47.889460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.889487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.889620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:001f1f1f cdw11:241f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.889637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.889765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.889781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 #44 NEW cov: 12062 ft: 14924 corp: 31/872b lim: 40 exec/s: 44 rss: 71Mb L: 24/40 MS: 1 EraseBytes- 00:07:51.109 [2024-07-14 21:08:47.939336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:e0ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.939361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 #45 NEW cov: 12062 ft: 14977 corp: 32/884b lim: 40 exec/s: 45 rss: 71Mb L: 12/40 MS: 1 ChangeByte- 00:07:51.109 [2024-07-14 21:08:47.980189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.980214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.980346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.980362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.980493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00e0e0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.980511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.980633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e03f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.980650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.109 [2024-07-14 21:08:47.980784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1fff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.109 [2024-07-14 21:08:47.980799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.109 #46 NEW cov: 12062 ft: 14987 corp: 33/924b lim: 40 exec/s: 46 rss: 71Mb L: 40/40 MS: 1 ChangeBit- 00:07:51.368 [2024-07-14 21:08:48.019490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:e0ffff74 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.019515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 #47 NEW cov: 12062 ft: 14996 corp: 34/937b lim: 40 exec/s: 47 rss: 71Mb L: 13/40 MS: 1 InsertByte- 00:07:51.369 [2024-07-14 21:08:48.070541] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affff1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.070567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.070699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a1f1f1f cdw11:1fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.070715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.070838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.070853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.070976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1a1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.070992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.071121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.071137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.369 #48 NEW cov: 12062 ft: 15021 corp: 35/977b lim: 40 exec/s: 48 rss: 71Mb L: 40/40 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:51.369 [2024-07-14 21:08:48.109734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1fff1f1f cdw11:0a1fffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.109762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 #49 NEW cov: 12062 ft: 15026 corp: 36/987b lim: 40 exec/s: 49 rss: 72Mb L: 10/40 MS: 1 ShuffleBytes- 00:07:51.369 [2024-07-14 21:08:48.149919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff1f1f1f cdw11:ff0aff1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.149945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 #50 NEW cov: 12062 ft: 15038 corp: 37/997b lim: 40 exec/s: 50 rss: 72Mb L: 10/40 MS: 1 ShuffleBytes- 00:07:51.369 [2024-07-14 21:08:48.190010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a002a82 cdw11:590e371c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.190035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 #55 NEW cov: 12062 ft: 15043 corp: 38/1007b lim: 40 exec/s: 55 rss: 72Mb L: 10/40 MS: 5 CopyPart-InsertByte-ShuffleBytes-ShuffleBytes-CMP- DE: "\000*\202Y\0167\034$"- 00:07:51.369 [2024-07-14 21:08:48.230709] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:e0000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.230736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.230854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.230870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.230997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffff74 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.231013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.369 [2024-07-14 21:08:48.231141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-07-14 21:08:48.231158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.369 #56 NEW cov: 12062 ft: 15053 corp: 39/1040b lim: 40 exec/s: 28 rss: 72Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:07:51.369 #56 DONE cov: 12062 ft: 15053 corp: 39/1040b lim: 40 exec/s: 28 rss: 72Mb 00:07:51.369 ###### Recommended dictionary. ###### 00:07:51.369 "\015\000\000\000" # Uses: 0 00:07:51.369 "\005\026`\365W\202*\000" # Uses: 0 00:07:51.369 "\377\377\377\000" # Uses: 1 00:07:51.369 "\001\000\000\000\002\033J\323" # Uses: 0 00:07:51.369 "\000*\202Y\0167\034$" # Uses: 0 00:07:51.369 ###### End of recommended dictionary. 
###### 00:07:51.369 Done 56 runs in 2 second(s) 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:51.629 21:08:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:51.629 [2024-07-14 21:08:48.396666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:51.629 [2024-07-14 21:08:48.396718] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4023354 ] 00:07:51.629 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.888 [2024-07-14 21:08:48.566822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.888 [2024-07-14 21:08:48.588357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.888 [2024-07-14 21:08:48.640500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.888 [2024-07-14 21:08:48.656794] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:51.888 INFO: Running with entropic power schedule (0xFF, 100). 
(0xFF, 100).
00:07:51.888 INFO: Seed: 1708929040 00:07:51.888 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:51.888 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:51.888 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:51.888 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.888 #2 INITED exec/s: 0 rss: 62Mb 00:07:51.888 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.888 This may also happen if the target rejected all inputs we tried so far 00:07:51.888 [2024-07-14 21:08:48.701594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-07-14 21:08:48.701630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.889 [2024-07-14 21:08:48.701662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.889 [2024-07-14 21:08:48.701677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.148 NEW_FUNC[1/694]: 0x4a7a80 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:52.148 NEW_FUNC[2/694]: 0x4c8f40 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:52.148 #8 NEW cov: 11845 ft: 11846 corp: 2/28b lim: 35 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:52.148 [2024-07-14 21:08:49.042968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.148 [2024-07-14 21:08:49.043012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.148 [2024-07-14 21:08:49.043043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.148 [2024-07-14 21:08:49.043058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.407 #14 NEW cov: 11975 ft: 12316 corp: 3/55b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 CMP- DE: "\000\000"- 00:07:52.407 [2024-07-14 21:08:49.123053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.407 [2024-07-14 21:08:49.123090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.407 [2024-07-14 21:08:49.123121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.407 [2024-07-14 21:08:49.123136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.407 #20 NEW cov: 11981 ft: 12557 corp: 4/78b lim: 35 exec/s: 0 rss: 70Mb L: 23/27 MS: 1 CrossOver- 00:07:52.407 [2024-07-14 21:08:49.173185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:52.407 [2024-07-14 21:08:49.173217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.407 [2024-07-14 21:08:49.173248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.407 [2024-07-14 21:08:49.173263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.407 #21 NEW cov: 12066 ft: 12782 corp: 5/105b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 ChangeBinInt- 00:07:52.407 [2024-07-14 21:08:49.253354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.407 [2024-07-14 21:08:49.253386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.407 [2024-07-14 21:08:49.253417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.407 [2024-07-14 21:08:49.253432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.666 #22 NEW cov: 12066 ft: 12867 corp: 6/128b lim: 35 exec/s: 0 rss: 70Mb L: 23/27 MS: 1 ChangeByte- 00:07:52.666 [2024-07-14 21:08:49.333534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.333567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.666 [2024-07-14 21:08:49.333600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.333616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.666 #23 NEW cov: 12073 ft: 13047 corp: 7/148b lim: 35 exec/s: 0 rss: 70Mb L: 20/27 MS: 1 InsertRepeatedBytes- 00:07:52.666 [2024-07-14 21:08:49.383747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.383778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.666 [2024-07-14 21:08:49.383809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.383825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.666 #24 NEW cov: 12073 ft: 13171 corp: 8/175b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:52.666 [2024-07-14 21:08:49.433896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.433926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.666 [2024-07-14 21:08:49.433957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.433976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.666 #25 NEW cov: 12073 ft: 13294 corp: 9/202b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 CopyPart- 00:07:52.666 [2024-07-14 21:08:49.483974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.484004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.666 [2024-07-14 21:08:49.484035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.484049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.666 #26 NEW cov: 12073 ft: 13305 corp: 10/229b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 ChangeByte- 00:07:52.666 [2024-07-14 21:08:49.564232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.564263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.666 [2024-07-14 21:08:49.564297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.666 [2024-07-14 21:08:49.564312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.926 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:52.926 #27 NEW cov: 12090 ft: 13420 corp: 11/256b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:52.926 [2024-07-14 21:08:49.644478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.644508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.926 [2024-07-14 21:08:49.644539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.644554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.926 [2024-07-14 21:08:49.644582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.644597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.926 #28 NEW cov: 12090 ft: 13692 corp: 12/285b lim: 35 exec/s: 28 rss: 70Mb L: 29/29 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:52.926 [2024-07-14 21:08:49.724682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.724713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.926 [2024-07-14 21:08:49.724744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.724759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.926 [2024-07-14 21:08:49.724788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.724803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.926 #29 NEW cov: 12090 ft: 13716 corp: 13/316b lim: 35 exec/s: 29 rss: 70Mb L: 31/31 MS: 1 CMP- DE: "\377\377\377\004"- 00:07:52.926 [2024-07-14 21:08:49.774752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.774790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.926 [2024-07-14 21:08:49.774822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.774837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.926 [2024-07-14 21:08:49.774865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.926 [2024-07-14 21:08:49.774880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.186 #30 NEW cov: 12090 ft: 13755 corp: 14/341b lim: 35 exec/s: 30 rss: 70Mb L: 25/31 MS: 1 CopyPart- 00:07:53.186 [2024-07-14 21:08:49.854923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.854954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.186 [2024-07-14 21:08:49.854986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.855000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.186 [2024-07-14 21:08:49.855029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.855043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.186 #31 NEW cov: 12090 ft: 13793 corp: 15/368b lim: 35 exec/s: 31 rss: 70Mb L: 27/31 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:53.186 [2024-07-14 21:08:49.935152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.935182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.186 [2024-07-14 
21:08:49.935214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000072 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.935230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.186 #32 NEW cov: 12090 ft: 13810 corp: 16/395b lim: 35 exec/s: 32 rss: 70Mb L: 27/31 MS: 1 ChangeByte- 00:07:53.186 [2024-07-14 21:08:49.985276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.985306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.186 [2024-07-14 21:08:49.985337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.186 [2024-07-14 21:08:49.985351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.186 #33 NEW cov: 12090 ft: 13830 corp: 17/422b lim: 35 exec/s: 33 rss: 70Mb L: 27/31 MS: 1 ShuffleBytes- 00:07:53.186 #34 NEW cov: 12090 ft: 14534 corp: 18/431b lim: 35 exec/s: 34 rss: 70Mb L: 9/31 MS: 1 CrossOver- 00:07:53.445 [2024-07-14 21:08:50.095655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:4 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.095696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.445 [2024-07-14 21:08:50.095730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.095750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.445 [2024-07-14 21:08:50.095781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.095797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.445 NEW_FUNC[1/1]: 0x4c7910 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:07:53.445 #35 NEW cov: 12114 ft: 14578 corp: 19/454b lim: 35 exec/s: 35 rss: 70Mb L: 23/31 MS: 1 ChangeBit- 00:07:53.445 [2024-07-14 21:08:50.145708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.145743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.445 #36 NEW cov: 12114 ft: 14729 corp: 20/469b lim: 35 exec/s: 36 rss: 70Mb L: 15/31 MS: 1 EraseBytes- 00:07:53.445 [2024-07-14 21:08:50.225987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.226019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.445 [2024-07-14 21:08:50.226051] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.226066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.445 [2024-07-14 21:08:50.226096] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.226112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.445 #37 NEW cov: 12114 ft: 14757 corp: 21/500b lim: 35 exec/s: 37 rss: 70Mb L: 31/31 MS: 1 ShuffleBytes- 00:07:53.445 [2024-07-14 21:08:50.306145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.306178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.445 [2024-07-14 21:08:50.306211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.306227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.445 [2024-07-14 21:08:50.306257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.445 [2024-07-14 21:08:50.306273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.445 #38 NEW cov: 12114 ft: 14787 corp: 22/527b lim: 35 exec/s: 38 rss: 70Mb L: 27/31 MS: 1 ShuffleBytes- 00:07:53.704 [2024-07-14 21:08:50.356297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.356330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.704 [2024-07-14 21:08:50.356364] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.356380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.704 #39 NEW cov: 12114 ft: 14796 corp: 23/551b lim: 35 exec/s: 39 rss: 70Mb L: 24/31 MS: 1 EraseBytes- 00:07:53.704 #40 NEW cov: 12114 ft: 14807 corp: 24/560b lim: 35 exec/s: 40 rss: 70Mb L: 9/31 MS: 1 CrossOver- 00:07:53.704 [2024-07-14 21:08:50.456508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.456541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.704 [2024-07-14 21:08:50.456572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.456588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.704 #41 NEW cov: 12114 ft: 14829 corp: 25/583b 
lim: 35 exec/s: 41 rss: 70Mb L: 23/31 MS: 1 EraseBytes- 00:07:53.704 [2024-07-14 21:08:50.506630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.506662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.704 [2024-07-14 21:08:50.506695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.506711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.704 [2024-07-14 21:08:50.506740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.704 [2024-07-14 21:08:50.506754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.704 #42 NEW cov: 12114 ft: 14858 corp: 26/610b lim: 35 exec/s: 42 rss: 70Mb L: 27/31 MS: 1 ChangeByte- 00:07:53.964 #45 NEW cov: 12121 ft: 14888 corp: 27/618b lim: 35 exec/s: 45 rss: 70Mb L: 8/31 MS: 3 CrossOver-ShuffleBytes-CrossOver- 00:07:53.964 [2024-07-14 21:08:50.667092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.964 [2024-07-14 21:08:50.667125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.964 [2024-07-14 21:08:50.667157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.964 [2024-07-14 21:08:50.667172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.964 [2024-07-14 21:08:50.667200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.964 [2024-07-14 21:08:50.667215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.964 [2024-07-14 21:08:50.667243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.964 [2024-07-14 21:08:50.667258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.964 #46 NEW cov: 12121 ft: 15042 corp: 28/648b lim: 35 exec/s: 23 rss: 70Mb L: 30/31 MS: 1 InsertRepeatedBytes- 00:07:53.964 #46 DONE cov: 12121 ft: 15042 corp: 28/648b lim: 35 exec/s: 23 rss: 70Mb 00:07:53.964 ###### Recommended dictionary. ###### 00:07:53.964 "\000\000" # Uses: 5 00:07:53.964 "\377\377\377\004" # Uses: 0 00:07:53.964 ###### End of recommended dictionary. 
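Editor's note: the "Recommended dictionary" block just printed lists the byte sequences (coming from CMP hooks and persistent auto-dict entries, per the MS: mutation tags) that helped reach new coverage. The standard libFuzzer/AFL dictionary file format is one quoted, escaped value per line, so the entries translate from the log's octal escapes to hex as sketched below. Whether this particular harness forwards a -dict= option is not shown in the log, so treat any reuse of such a file as an assumption.

  # nvmf_14.dict -- hypothetical reuse of the recommended entries above
  # "\000\000" in the log, used 5 times:
  "\x00\x00"
  # "\377\377\377\004" in the log:
  "\xff\xff\xff\x04"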
###### 00:07:53.964 Done 46 runs in 2 second(s) 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:53.964 21:08:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:53.964 [2024-07-14 21:08:50.856631] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:53.964 [2024-07-14 21:08:50.856724] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4023888 ] 00:07:54.223 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.223 [2024-07-14 21:08:51.029700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.223 [2024-07-14 21:08:51.050926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.223 [2024-07-14 21:08:51.103078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.223 [2024-07-14 21:08:51.119396] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:54.481 INFO: Running with entropic power schedule (0xFF, 100). 
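Editor's note: the opening lines of this block (the rm -rf of fuzzer 14's config and suppression file at nvmf/run.sh@54, the (( i++ )) and (( i < fuzz_num )) checks at ../common.sh@72, and the next start_llvm_fuzz call at @73) are consistent with a C-style for loop driving the runs. A minimal sketch of that implied control flow follows; fuzz_num and the starting index are assumptions, not taken from common.sh itself.

  # Implied driver loop (shape inferred from the trace, not verbatim common.sh)
  for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz "$i" "$timen" "$core"   # e.g. "start_llvm_fuzz 15 1 0x1" above
  done
  # Each start_llvm_fuzz removes its own artifacts before returning (nvmf/run.sh@54):
  #   rm -rf /tmp/fuzz_json_NN.conf /var/tmp/suppress_nvmf_fuzz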
00:07:54.481 INFO: Seed: 4170925770 00:07:54.481 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:54.481 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:54.481 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:54.481 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.481 #2 INITED exec/s: 0 rss: 63Mb 00:07:54.481 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:54.481 This may also happen if the target rejected all inputs we tried so far 00:07:54.481 [2024-07-14 21:08:51.164488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.481 [2024-07-14 21:08:51.164516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.740 NEW_FUNC[1/691]: 0x4a8fc0 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:54.740 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:54.740 #9 NEW cov: 11800 ft: 11801 corp: 2/12b lim: 35 exec/s: 0 rss: 70Mb L: 11/11 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:54.740 [2024-07-14 21:08:51.475316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.740 [2024-07-14 21:08:51.475353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.740 #10 NEW cov: 11930 ft: 12261 corp: 3/24b lim: 35 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 CrossOver- 00:07:54.740 [2024-07-14 21:08:51.525451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.740 [2024-07-14 21:08:51.525479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.740 #11 NEW cov: 11936 ft: 12520 corp: 4/36b lim: 35 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 ChangeBinInt- 00:07:54.740 [2024-07-14 21:08:51.575577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.740 [2024-07-14 21:08:51.575603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.740 #17 NEW cov: 12021 ft: 12854 corp: 5/48b lim: 35 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 ChangeBinInt- 00:07:54.740 [2024-07-14 21:08:51.625731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.740 [2024-07-14 21:08:51.625757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.999 #18 NEW cov: 12021 ft: 12977 corp: 6/56b lim: 35 exec/s: 0 rss: 70Mb L: 8/12 MS: 1 EraseBytes- 00:07:54.999 [2024-07-14 21:08:51.665861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.665887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.999 #19 NEW cov: 12021 ft: 13036 corp: 7/68b lim: 35 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 ChangeByte- 00:07:54.999 [2024-07-14 21:08:51.716086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.716112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.999 [2024-07-14 21:08:51.716171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.716185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.999 #20 NEW cov: 12021 ft: 13468 corp: 8/83b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 CrossOver- 00:07:54.999 [2024-07-14 21:08:51.756067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.756094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.999 #21 NEW cov: 12021 ft: 13547 corp: 9/94b lim: 35 exec/s: 0 rss: 70Mb L: 11/15 MS: 1 EraseBytes- 00:07:54.999 [2024-07-14 21:08:51.796226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.796251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.999 #22 NEW cov: 12021 ft: 13617 corp: 10/107b lim: 35 exec/s: 0 rss: 70Mb L: 13/15 MS: 1 CopyPart- 00:07:54.999 [2024-07-14 21:08:51.846351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.846378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.999 #23 NEW cov: 12021 ft: 13647 corp: 11/120b lim: 35 exec/s: 0 rss: 70Mb L: 13/15 MS: 1 InsertByte- 00:07:54.999 [2024-07-14 21:08:51.896464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.999 [2024-07-14 21:08:51.896494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.259 #24 NEW cov: 12021 ft: 13746 corp: 12/133b lim: 35 exec/s: 0 rss: 70Mb L: 13/15 MS: 1 ChangeBinInt- 00:07:55.259 [2024-07-14 21:08:51.946627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:51.946654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.259 #25 NEW cov: 12021 ft: 13764 corp: 13/144b lim: 35 exec/s: 0 rss: 70Mb L: 11/15 MS: 1 ChangeByte- 00:07:55.259 [2024-07-14 21:08:51.986755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:51.986781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.259 #26 NEW cov: 12021 ft: 13816 corp: 14/157b lim: 35 exec/s: 0 rss: 70Mb L: 13/15 MS: 1 CrossOver- 00:07:55.259 [2024-07-14 21:08:52.026958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:52.026984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.259 [2024-07-14 21:08:52.027043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:52.027057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.259 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:55.259 #27 NEW cov: 12044 ft: 13852 corp: 15/172b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:55.259 [2024-07-14 21:08:52.077104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:52.077131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.259 [2024-07-14 21:08:52.077193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:52.077207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.259 #28 NEW cov: 12044 ft: 13887 corp: 16/187b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:55.259 [2024-07-14 21:08:52.117113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.259 [2024-07-14 21:08:52.117139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.259 #29 NEW cov: 12044 ft: 13944 corp: 17/200b lim: 35 exec/s: 0 rss: 71Mb L: 13/15 MS: 1 CopyPart- 00:07:55.519 [2024-07-14 21:08:52.167630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.167657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.167719] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.167733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.167789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.167803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.167865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012a SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.167880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.519 #30 NEW cov: 12044 ft: 14497 corp: 18/233b lim: 35 exec/s: 30 rss: 71Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:55.519 [2024-07-14 21:08:52.217508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.217534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.217596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.217610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.519 #31 NEW cov: 12044 ft: 14543 corp: 19/247b lim: 35 exec/s: 31 rss: 71Mb L: 14/33 MS: 1 InsertByte- 00:07:55.519 [2024-07-14 21:08:52.257733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.257759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.257823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.257837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.257894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.257908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.519 #32 NEW cov: 12044 ft: 14704 corp: 20/273b lim: 35 exec/s: 32 rss: 71Mb L: 26/33 MS: 1 InsertRepeatedBytes- 00:07:55.519 [2024-07-14 21:08:52.297578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.297605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.519 #33 NEW cov: 12044 ft: 14716 corp: 21/285b lim: 35 exec/s: 33 rss: 71Mb L: 12/33 MS: 1 CopyPart- 00:07:55.519 [2024-07-14 21:08:52.337850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.337876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.519 [2024-07-14 21:08:52.337934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.519 [2024-07-14 21:08:52.337949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.519 #34 NEW cov: 12044 ft: 14731 corp: 22/300b lim: 35 exec/s: 34 rss: 71Mb L: 15/33 MS: 1 ChangeBinInt- 
00:07:55.519 NEW_FUNC[1/1]: 0x4c6a10 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:07:55.519 #36 NEW cov: 12076 ft: 14856 corp: 23/310b lim: 35 exec/s: 36 rss: 71Mb L: 10/33 MS: 2 EraseBytes-CMP- DE: "\377\377\377\377"- 00:07:55.778 [2024-07-14 21:08:52.427974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.428000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.778 #37 NEW cov: 12076 ft: 14990 corp: 24/322b lim: 35 exec/s: 37 rss: 71Mb L: 12/33 MS: 1 ChangeByte- 00:07:55.778 [2024-07-14 21:08:52.468240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.468266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.778 [2024-07-14 21:08:52.468329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.468344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.778 #38 NEW cov: 12076 ft: 15038 corp: 25/341b lim: 35 exec/s: 38 rss: 71Mb L: 19/33 MS: 1 CopyPart- 00:07:55.778 [2024-07-14 21:08:52.518214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.518240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.778 #39 NEW cov: 12076 ft: 15057 corp: 26/354b lim: 35 exec/s: 39 rss: 71Mb L: 13/33 MS: 1 InsertByte- 00:07:55.778 [2024-07-14 21:08:52.558342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.558368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.778 #40 NEW cov: 12076 ft: 15085 corp: 27/367b lim: 35 exec/s: 40 rss: 71Mb L: 13/33 MS: 1 CMP- DE: "\001\002"- 00:07:55.778 [2024-07-14 21:08:52.608491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.608515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.778 #41 NEW cov: 12076 ft: 15110 corp: 28/380b lim: 35 exec/s: 41 rss: 71Mb L: 13/33 MS: 1 ShuffleBytes- 00:07:55.778 [2024-07-14 21:08:52.658722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.658748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.778 [2024-07-14 21:08:52.658812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.778 [2024-07-14 21:08:52.658826] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.037 #42 NEW cov: 12076 ft: 15115 corp: 29/397b lim: 35 exec/s: 42 rss: 71Mb L: 17/33 MS: 1 InsertRepeatedBytes- 00:07:56.037 [2024-07-14 21:08:52.709175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.709202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.037 [2024-07-14 21:08:52.709263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.709277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.037 [2024-07-14 21:08:52.709338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.709352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.037 [2024-07-14 21:08:52.709416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.709430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.037 #43 NEW cov: 12076 ft: 15123 corp: 30/430b lim: 35 exec/s: 43 rss: 71Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:56.037 [2024-07-14 21:08:52.759089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.759116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.037 [2024-07-14 21:08:52.759178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.759192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.037 #44 NEW cov: 12076 ft: 15131 corp: 31/444b lim: 35 exec/s: 44 rss: 71Mb L: 14/33 MS: 1 InsertByte- 00:07:56.037 [2024-07-14 21:08:52.809115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.809142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.037 #45 NEW cov: 12076 ft: 15147 corp: 32/456b lim: 35 exec/s: 45 rss: 71Mb L: 12/33 MS: 1 CopyPart- 00:07:56.037 [2024-07-14 21:08:52.849218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.849243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.037 #46 NEW cov: 12076 ft: 15156 corp: 33/468b lim: 35 exec/s: 46 rss: 72Mb L: 12/33 MS: 1 ChangeBinInt- 00:07:56.037 [2024-07-14 21:08:52.889322] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.889347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.037 #47 NEW cov: 12076 ft: 15209 corp: 34/481b lim: 35 exec/s: 47 rss: 72Mb L: 13/33 MS: 1 CMP- DE: "\002\000"- 00:07:56.037 [2024-07-14 21:08:52.929428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.037 [2024-07-14 21:08:52.929459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.296 #48 NEW cov: 12076 ft: 15245 corp: 35/492b lim: 35 exec/s: 48 rss: 72Mb L: 11/33 MS: 1 EraseBytes- 00:07:56.296 [2024-07-14 21:08:52.969653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.296 [2024-07-14 21:08:52.969678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.296 #49 NEW cov: 12076 ft: 15251 corp: 36/506b lim: 35 exec/s: 49 rss: 72Mb L: 14/33 MS: 1 InsertByte- 00:07:56.296 [2024-07-14 21:08:53.009629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000125 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.296 [2024-07-14 21:08:53.009655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.296 #51 NEW cov: 12076 ft: 15272 corp: 37/516b lim: 35 exec/s: 51 rss: 72Mb L: 10/33 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:56.296 [2024-07-14 21:08:53.049717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.296 [2024-07-14 21:08:53.049742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.296 #52 NEW cov: 12076 ft: 15325 corp: 38/527b lim: 35 exec/s: 52 rss: 72Mb L: 11/33 MS: 1 EraseBytes- 00:07:56.296 [2024-07-14 21:08:53.099891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.296 [2024-07-14 21:08:53.099916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.296 #55 NEW cov: 12076 ft: 15340 corp: 39/535b lim: 35 exec/s: 55 rss: 72Mb L: 8/33 MS: 3 CrossOver-ShuffleBytes-CrossOver- 00:07:56.296 [2024-07-14 21:08:53.140001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.296 [2024-07-14 21:08:53.140027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.296 #56 NEW cov: 12076 ft: 15356 corp: 40/547b lim: 35 exec/s: 28 rss: 72Mb L: 12/33 MS: 1 ShuffleBytes- 00:07:56.296 #56 DONE cov: 12076 ft: 15356 corp: 40/547b lim: 35 exec/s: 28 rss: 72Mb 00:07:56.296 ###### Recommended dictionary. ###### 00:07:56.296 "\377\377\377\377" # Uses: 0 00:07:56.296 "\001\002" # Uses: 0 00:07:56.296 "\002\000" # Uses: 0 00:07:56.297 ###### End of recommended dictionary. 
###### 00:07:56.297 Done 56 runs in 2 second(s) 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:56.555 21:08:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:56.555 [2024-07-14 21:08:53.315534] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:56.555 [2024-07-14 21:08:53.315598] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024174 ] 00:07:56.555 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.812 [2024-07-14 21:08:53.490168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.812 [2024-07-14 21:08:53.511721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.812 [2024-07-14 21:08:53.564057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.812 [2024-07-14 21:08:53.580372] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:56.812 INFO: Running with entropic power schedule (0xFF, 100). 
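Editor's note: a quick consistency check on the coverage preamble that each run prints ("Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b)"). The counter address range is exactly 357263 bytes, one 8-bit counter per instrumented edge, and the PC-table range works out to the same entry count at 16 bytes per entry (PC plus flags on a 64-bit build). Both are easy to verify from any shell:

  $ printf '%d\n' $(( 0x283af9b - 0x27e3c0c ))         # size of the 8-bit counter region
  357263
  $ printf '%d\n' $(( (0x2dae890 - 0x283afa0) / 16 ))  # PC table entries at 16 bytes each
  357263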
00:07:56.812 INFO: Seed: 2336963411 00:07:56.812 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:56.812 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:56.812 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:56.812 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.812 #2 INITED exec/s: 0 rss: 63Mb 00:07:56.812 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:56.812 This may also happen if the target rejected all inputs we tried so far 00:07:56.812 [2024-07-14 21:08:53.646094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357456645743973 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.812 [2024-07-14 21:08:53.646133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.070 NEW_FUNC[1/692]: 0x4aa470 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:57.070 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:57.070 #12 NEW cov: 11904 ft: 11905 corp: 2/24b lim: 105 exec/s: 0 rss: 69Mb L: 23/23 MS: 5 ChangeByte-InsertByte-EraseBytes-ChangeByte-InsertRepeatedBytes- 00:07:57.329 [2024-07-14 21:08:53.986905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:53.986951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.329 #14 NEW cov: 12034 ft: 12592 corp: 3/48b lim: 105 exec/s: 0 rss: 69Mb L: 24/24 MS: 2 ShuffleBytes-CrossOver- 00:07:57.329 [2024-07-14 21:08:54.026949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3343189783032980837 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.026979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.329 #15 NEW cov: 12040 ft: 12880 corp: 4/73b lim: 105 exec/s: 0 rss: 69Mb L: 25/25 MS: 1 InsertByte- 00:07:57.329 [2024-07-14 21:08:54.077210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.077235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.329 #16 NEW cov: 12125 ft: 13189 corp: 5/97b lim: 105 exec/s: 0 rss: 69Mb L: 24/25 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:57.329 [2024-07-14 21:08:54.117270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.117298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.329 #17 NEW cov: 12125 ft: 13366 corp: 6/131b lim: 105 exec/s: 0 rss: 69Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:57.329 [2024-07-14 21:08:54.157375] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3343189783032980837 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.157406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.329 #18 NEW cov: 12125 ft: 13454 corp: 7/156b lim: 105 exec/s: 0 rss: 70Mb L: 25/34 MS: 1 ShuffleBytes- 00:07:57.329 [2024-07-14 21:08:54.208111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069582487552 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.208142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.329 [2024-07-14 21:08:54.208228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.208253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.329 [2024-07-14 21:08:54.208379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.208401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.329 [2024-07-14 21:08:54.208527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.329 [2024-07-14 21:08:54.208548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.588 #23 NEW cov: 12125 ft: 14190 corp: 8/246b lim: 105 exec/s: 0 rss: 70Mb L: 90/90 MS: 5 EraseBytes-EraseBytes-PersAutoDict-PersAutoDict-InsertRepeatedBytes- DE: "\002\000\000\000"-"\002\000\000\000"- 00:07:57.588 [2024-07-14 21:08:54.247612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.247636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.588 #24 NEW cov: 12125 ft: 14307 corp: 9/283b lim: 105 exec/s: 0 rss: 70Mb L: 37/90 MS: 1 CopyPart- 00:07:57.588 [2024-07-14 21:08:54.297797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.297830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.588 #28 NEW cov: 12125 ft: 14348 corp: 10/310b lim: 105 exec/s: 0 rss: 70Mb L: 27/90 MS: 4 InsertByte-ChangeBinInt-ShuffleBytes-CrossOver- 00:07:57.588 [2024-07-14 21:08:54.337926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.337959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.588 #29 NEW cov: 12125 
ft: 14374 corp: 11/344b lim: 105 exec/s: 0 rss: 70Mb L: 34/90 MS: 1 ShuffleBytes- 00:07:57.588 [2024-07-14 21:08:54.378058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.378086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.588 #30 NEW cov: 12125 ft: 14399 corp: 12/371b lim: 105 exec/s: 0 rss: 70Mb L: 27/90 MS: 1 ShuffleBytes- 00:07:57.588 [2024-07-14 21:08:54.427934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.427960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.588 [2024-07-14 21:08:54.428088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7306357456654787941 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.428109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.588 #31 NEW cov: 12125 ft: 14727 corp: 13/414b lim: 105 exec/s: 0 rss: 70Mb L: 43/90 MS: 1 CopyPart- 00:07:57.588 [2024-07-14 21:08:54.478366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.588 [2024-07-14 21:08:54.478392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.847 #32 NEW cov: 12125 ft: 14737 corp: 14/441b lim: 105 exec/s: 0 rss: 70Mb L: 27/90 MS: 1 ChangeByte- 00:07:57.847 [2024-07-14 21:08:54.518502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3343189783032963941 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.847 [2024-07-14 21:08:54.518528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.847 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:57.847 #33 NEW cov: 12148 ft: 14785 corp: 15/466b lim: 105 exec/s: 0 rss: 70Mb L: 25/90 MS: 1 ChangeByte- 00:07:57.847 [2024-07-14 21:08:54.568561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357456645744101 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.847 [2024-07-14 21:08:54.568594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.847 #34 NEW cov: 12148 ft: 14792 corp: 16/489b lim: 105 exec/s: 0 rss: 70Mb L: 23/90 MS: 1 ChangeBit- 00:07:57.847 [2024-07-14 21:08:54.618708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.847 [2024-07-14 21:08:54.618741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.847 #36 NEW cov: 12148 ft: 14804 corp: 17/524b lim: 105 exec/s: 36 rss: 70Mb L: 35/90 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:57.847 [2024-07-14 
21:08:54.659047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.847 [2024-07-14 21:08:54.659076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.847 [2024-07-14 21:08:54.659190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7306357456654787950 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.847 [2024-07-14 21:08:54.659216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.847 #37 NEW cov: 12148 ft: 14889 corp: 18/567b lim: 105 exec/s: 37 rss: 70Mb L: 43/90 MS: 1 ChangeByte- 00:07:57.847 [2024-07-14 21:08:54.708996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.847 [2024-07-14 21:08:54.709029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.847 #38 NEW cov: 12148 ft: 14903 corp: 19/591b lim: 105 exec/s: 38 rss: 70Mb L: 24/90 MS: 1 ChangeByte- 00:07:58.105 [2024-07-14 21:08:54.759126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.105 [2024-07-14 21:08:54.759150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.105 #39 NEW cov: 12148 ft: 14924 corp: 20/626b lim: 105 exec/s: 39 rss: 70Mb L: 35/90 MS: 1 ChangeByte- 00:07:58.106 [2024-07-14 21:08:54.809439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.809483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.106 [2024-07-14 21:08:54.809581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7954875802064383845 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.809607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.106 #40 NEW cov: 12148 ft: 14960 corp: 21/670b lim: 105 exec/s: 40 rss: 70Mb L: 44/90 MS: 1 InsertByte- 00:07:58.106 [2024-07-14 21:08:54.859985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069582487552 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.860016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.106 [2024-07-14 21:08:54.860131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.860155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.106 [2024-07-14 21:08:54.860281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.860305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.106 [2024-07-14 21:08:54.860435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.860458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.106 #41 NEW cov: 12148 ft: 15034 corp: 22/760b lim: 105 exec/s: 41 rss: 70Mb L: 90/90 MS: 1 ChangeBit- 00:07:58.106 [2024-07-14 21:08:54.910012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3343189783032963941 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.910042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.106 [2024-07-14 21:08:54.910157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2893606913523066920 len:10281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.910180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.106 [2024-07-14 21:08:54.910308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2893606913523066920 len:10281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.910330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.106 #42 NEW cov: 12148 ft: 15383 corp: 23/833b lim: 105 exec/s: 42 rss: 70Mb L: 73/90 MS: 1 InsertRepeatedBytes- 00:07:58.106 [2024-07-14 21:08:54.959395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.106 [2024-07-14 21:08:54.959420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.106 #43 NEW cov: 12148 ft: 15399 corp: 24/867b lim: 105 exec/s: 43 rss: 70Mb L: 34/90 MS: 1 ChangeBinInt- 00:07:58.365 [2024-07-14 21:08:55.009811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357456645743973 len:25961 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.009843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.365 #44 NEW cov: 12148 ft: 15436 corp: 25/890b lim: 105 exec/s: 44 rss: 70Mb L: 23/90 MS: 1 ChangeBinInt- 00:07:58.365 [2024-07-14 21:08:55.050181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7290876927538783589 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.050216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.365 [2024-07-14 21:08:55.050346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7954875802064383845 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 
[2024-07-14 21:08:55.050372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.365 #45 NEW cov: 12148 ft: 15453 corp: 26/934b lim: 105 exec/s: 45 rss: 71Mb L: 44/90 MS: 1 ChangeByte- 00:07:58.365 [2024-07-14 21:08:55.099910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357456645743973 len:25961 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.099936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.365 #51 NEW cov: 12148 ft: 15455 corp: 27/957b lim: 105 exec/s: 51 rss: 71Mb L: 23/90 MS: 1 ChangeByte- 00:07:58.365 [2024-07-14 21:08:55.149823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17744172597428811510 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.149857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.365 #52 NEW cov: 12148 ft: 15458 corp: 28/994b lim: 105 exec/s: 52 rss: 71Mb L: 37/90 MS: 1 ChangeByte- 00:07:58.365 [2024-07-14 21:08:55.210993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17744172597428811510 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.211025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.365 [2024-07-14 21:08:55.211111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17795682518166861558 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.211133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.365 [2024-07-14 21:08:55.211244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.211265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.365 [2024-07-14 21:08:55.211382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.365 [2024-07-14 21:08:55.211403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.365 #53 NEW cov: 12148 ft: 15467 corp: 29/1088b lim: 105 exec/s: 53 rss: 71Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:58.624 [2024-07-14 21:08:55.271289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069582487552 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.271319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.624 [2024-07-14 21:08:55.271411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.271430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.624 [2024-07-14 21:08:55.271557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.271576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.624 [2024-07-14 21:08:55.271708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.271730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.624 #54 NEW cov: 12148 ft: 15483 corp: 30/1178b lim: 105 exec/s: 54 rss: 71Mb L: 90/94 MS: 1 ChangeByte- 00:07:58.624 [2024-07-14 21:08:55.310770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.310803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.624 #55 NEW cov: 12148 ft: 15494 corp: 31/1202b lim: 105 exec/s: 55 rss: 71Mb L: 24/94 MS: 1 CopyPart- 00:07:58.624 [2024-07-14 21:08:55.350855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357456645743973 len:25961 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.350888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.624 #56 NEW cov: 12148 ft: 15508 corp: 32/1225b lim: 105 exec/s: 56 rss: 71Mb L: 23/94 MS: 1 ChangeBit- 00:07:58.624 [2024-07-14 21:08:55.401147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069582487552 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.401179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.624 [2024-07-14 21:08:55.401297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.401321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.624 #57 NEW cov: 12148 ft: 15521 corp: 33/1284b lim: 105 exec/s: 57 rss: 71Mb L: 59/94 MS: 1 EraseBytes- 00:07:58.624 [2024-07-14 21:08:55.451135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357456645744101 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.451165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.624 #58 NEW cov: 12148 ft: 15528 corp: 34/1307b lim: 105 exec/s: 58 rss: 71Mb L: 23/94 MS: 1 CopyPart- 00:07:58.624 [2024-07-14 21:08:55.501493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357455119017317 len:20 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.501523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.624 [2024-07-14 21:08:55.501664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.624 [2024-07-14 21:08:55.501687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.883 #59 NEW cov: 12148 ft: 15551 corp: 35/1350b lim: 105 exec/s: 59 rss: 72Mb L: 43/94 MS: 1 CMP- DE: "\023x\202\277\\\202*\000"- 00:07:58.883 [2024-07-14 21:08:55.551490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.883 [2024-07-14 21:08:55.551520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.883 #60 NEW cov: 12148 ft: 15555 corp: 36/1372b lim: 105 exec/s: 60 rss: 72Mb L: 22/94 MS: 1 EraseBytes- 00:07:58.883 [2024-07-14 21:08:55.591490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.883 [2024-07-14 21:08:55.591519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.883 #61 NEW cov: 12148 ft: 15591 corp: 37/1407b lim: 105 exec/s: 61 rss: 72Mb L: 35/94 MS: 1 InsertByte- 00:07:58.883 [2024-07-14 21:08:55.631713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7306357022854047205 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.883 [2024-07-14 21:08:55.631744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.883 #62 NEW cov: 12148 ft: 15611 corp: 38/1431b lim: 105 exec/s: 31 rss: 72Mb L: 24/94 MS: 1 InsertByte- 00:07:58.883 #62 DONE cov: 12148 ft: 15611 corp: 38/1431b lim: 105 exec/s: 31 rss: 72Mb 00:07:58.883 ###### Recommended dictionary. ###### 00:07:58.883 "\002\000\000\000" # Uses: 4 00:07:58.883 "\023x\202\277\\\202*\000" # Uses: 0 00:07:58.883 ###### End of recommended dictionary. 
###### 00:07:58.883 Done 62 runs in 2 second(s) 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:58.883 21:08:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:58.883 [2024-07-14 21:08:55.783414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:58.883 [2024-07-14 21:08:55.783470] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024703 ] 00:07:59.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.142 [2024-07-14 21:08:55.954873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.142 [2024-07-14 21:08:55.976188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.142 [2024-07-14 21:08:56.028296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.143 [2024-07-14 21:08:56.044595] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:59.401 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:59.401 INFO: Seed: 504000829 00:07:59.401 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:59.401 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:59.401 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:59.401 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.401 #2 INITED exec/s: 0 rss: 63Mb 00:07:59.401 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.401 This may also happen if the target rejected all inputs we tried so far 00:07:59.401 [2024-07-14 21:08:56.092727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.401 [2024-07-14 21:08:56.092757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.659 NEW_FUNC[1/693]: 0x4ad7f0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:59.659 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.659 #10 NEW cov: 11925 ft: 11926 corp: 2/35b lim: 120 exec/s: 0 rss: 69Mb L: 34/34 MS: 3 InsertRepeatedBytes-ChangeBinInt-CopyPart- 00:07:59.659 [2024-07-14 21:08:56.403619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029813933544166 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.660 [2024-07-14 21:08:56.403653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.660 #16 NEW cov: 12055 ft: 12512 corp: 3/70b lim: 120 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 InsertByte- 00:07:59.660 [2024-07-14 21:08:56.453661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884983314 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.660 [2024-07-14 21:08:56.453690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.660 #17 NEW cov: 12061 ft: 12894 corp: 4/104b lim: 120 exec/s: 0 rss: 69Mb L: 34/35 MS: 1 ChangeBit- 00:07:59.660 [2024-07-14 21:08:56.493755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884983314 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.660 [2024-07-14 21:08:56.493784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.660 #18 NEW cov: 12146 ft: 13162 corp: 5/133b lim: 120 exec/s: 0 rss: 69Mb L: 29/35 MS: 1 EraseBytes- 00:07:59.660 [2024-07-14 21:08:56.543921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.660 [2024-07-14 21:08:56.543950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.918 #19 NEW cov: 12146 ft: 13199 corp: 6/167b lim: 120 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 ChangeByte- 00:07:59.918 [2024-07-14 21:08:56.584055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:16638238761042503186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.584085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.918 #20 NEW cov: 12146 ft: 13253 corp: 7/201b lim: 120 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:59.918 [2024-07-14 21:08:56.624152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029812844259046 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.624179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.918 #21 NEW cov: 12146 ft: 13317 corp: 8/235b lim: 120 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:59.918 [2024-07-14 21:08:56.674607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9693583158213747846 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.674635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.918 [2024-07-14 21:08:56.674679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9693583160302274182 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.674694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.918 [2024-07-14 21:08:56.674754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9693583160302274182 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.674771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.918 #25 NEW cov: 12146 ft: 14186 corp: 9/312b lim: 120 exec/s: 0 rss: 70Mb L: 77/77 MS: 4 ShuffleBytes-InsertByte-CopyPart-InsertRepeatedBytes- 00:07:59.918 [2024-07-14 21:08:56.714399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638238838351916562 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.714429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.918 #26 NEW cov: 12146 ft: 14232 corp: 10/341b lim: 120 exec/s: 0 rss: 70Mb L: 29/77 MS: 1 CrossOver- 00:07:59.918 [2024-07-14 21:08:56.764524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.764552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.918 #27 NEW cov: 12146 ft: 14330 corp: 11/375b lim: 120 exec/s: 0 rss: 70Mb L: 34/77 MS: 1 CopyPart- 00:07:59.918 [2024-07-14 21:08:56.804651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029813933544166 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.918 [2024-07-14 21:08:56.804680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.177 #28 NEW cov: 12146 ft: 14350 corp: 12/410b lim: 120 exec/s: 0 rss: 70Mb L: 35/77 
MS: 1 CrossOver- 00:08:00.177 [2024-07-14 21:08:56.854815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638238838351916562 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:56.854843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.177 #29 NEW cov: 12146 ft: 14371 corp: 13/439b lim: 120 exec/s: 0 rss: 70Mb L: 29/77 MS: 1 ShuffleBytes- 00:08:00.177 [2024-07-14 21:08:56.904948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:317122278 len:35 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:56.904976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.177 #30 NEW cov: 12146 ft: 14386 corp: 14/473b lim: 120 exec/s: 0 rss: 70Mb L: 34/77 MS: 1 CopyPart- 00:08:00.177 [2024-07-14 21:08:56.955048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239010150608402 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:56.955077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.177 #31 NEW cov: 12146 ft: 14416 corp: 15/502b lim: 120 exec/s: 0 rss: 70Mb L: 29/77 MS: 1 ChangeByte- 00:08:00.177 [2024-07-14 21:08:56.995184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:56.995213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.177 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:00.177 #32 NEW cov: 12169 ft: 14561 corp: 16/536b lim: 120 exec/s: 0 rss: 70Mb L: 34/77 MS: 1 ChangeBit- 00:08:00.177 [2024-07-14 21:08:57.035603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029813933544166 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:57.035631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.177 [2024-07-14 21:08:57.035677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16638239752757634790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:57.035695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.177 [2024-07-14 21:08:57.035750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:57.035766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.177 #33 NEW cov: 12169 ft: 14596 corp: 17/608b lim: 120 exec/s: 0 rss: 70Mb L: 72/77 MS: 1 InsertRepeatedBytes- 00:08:00.177 [2024-07-14 21:08:57.075428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.177 [2024-07-14 21:08:57.075464] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.436 #34 NEW cov: 12169 ft: 14645 corp: 18/642b lim: 120 exec/s: 34 rss: 70Mb L: 34/77 MS: 1 ChangeBinInt- 00:08:00.436 [2024-07-14 21:08:57.125567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.125594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.436 #35 NEW cov: 12169 ft: 14657 corp: 19/689b lim: 120 exec/s: 35 rss: 70Mb L: 47/77 MS: 1 InsertRepeatedBytes- 00:08:00.436 [2024-07-14 21:08:57.165980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029813933544166 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.166007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.436 [2024-07-14 21:08:57.166047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16638239752757634790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.166063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.436 [2024-07-14 21:08:57.166118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.166136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.436 #36 NEW cov: 12169 ft: 14687 corp: 20/761b lim: 120 exec/s: 36 rss: 70Mb L: 72/77 MS: 1 ChangeBit- 00:08:00.436 [2024-07-14 21:08:57.216290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.216319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.436 [2024-07-14 21:08:57.216359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16638267347922511590 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.216375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.436 [2024-07-14 21:08:57.216431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.216452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.436 [2024-07-14 21:08:57.216508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.436 [2024-07-14 21:08:57.216528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.437 #37 NEW cov: 12169 ft: 15089 corp: 21/877b lim: 120 exec/s: 37 rss: 70Mb L: 116/116 MS: 1 
InsertRepeatedBytes- 00:08:00.437 [2024-07-14 21:08:57.265982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:317122278 len:35 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.437 [2024-07-14 21:08:57.266011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.437 #38 NEW cov: 12169 ft: 15174 corp: 22/911b lim: 120 exec/s: 38 rss: 70Mb L: 34/116 MS: 1 ChangeBit- 00:08:00.437 [2024-07-14 21:08:57.316582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11357407133111786909 len:40350 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.437 [2024-07-14 21:08:57.316610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.437 [2024-07-14 21:08:57.316654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11357407135578037661 len:40350 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.437 [2024-07-14 21:08:57.316670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.437 [2024-07-14 21:08:57.316724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11357407135578037661 len:40350 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.437 [2024-07-14 21:08:57.316742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.437 [2024-07-14 21:08:57.316796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11357407135578037661 len:40350 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.437 [2024-07-14 21:08:57.316813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.695 #39 NEW cov: 12169 ft: 15186 corp: 23/1027b lim: 120 exec/s: 39 rss: 70Mb L: 116/116 MS: 1 InsertRepeatedBytes- 00:08:00.695 [2024-07-14 21:08:57.356198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029812844259046 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.695 [2024-07-14 21:08:57.356226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.695 #41 NEW cov: 12169 ft: 15303 corp: 24/1063b lim: 120 exec/s: 41 rss: 70Mb L: 36/116 MS: 2 EraseBytes-CopyPart- 00:08:00.695 [2024-07-14 21:08:57.406371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029813933544166 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.695 [2024-07-14 21:08:57.406398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.695 #42 NEW cov: 12169 ft: 15311 corp: 25/1098b lim: 120 exec/s: 42 rss: 70Mb L: 35/116 MS: 1 ChangeBinInt- 00:08:00.695 [2024-07-14 21:08:57.446468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.695 [2024-07-14 21:08:57.446496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.695 #43 NEW cov: 12169 ft: 15337 corp: 26/1132b lim: 120 exec/s: 43 
rss: 70Mb L: 34/116 MS: 1 ChangeBit- 00:08:00.695 [2024-07-14 21:08:57.486589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1362029812844259046 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.695 [2024-07-14 21:08:57.486617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.695 #44 NEW cov: 12169 ft: 15401 corp: 27/1162b lim: 120 exec/s: 44 rss: 70Mb L: 30/116 MS: 1 EraseBytes- 00:08:00.695 [2024-07-14 21:08:57.536769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.695 [2024-07-14 21:08:57.536799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.695 #45 NEW cov: 12169 ft: 15410 corp: 28/1207b lim: 120 exec/s: 45 rss: 70Mb L: 45/116 MS: 1 CrossOver- 00:08:00.696 [2024-07-14 21:08:57.576842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638238838351916562 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.696 [2024-07-14 21:08:57.576871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.696 #46 NEW cov: 12169 ft: 15447 corp: 29/1237b lim: 120 exec/s: 46 rss: 70Mb L: 30/116 MS: 1 InsertByte- 00:08:00.954 [2024-07-14 21:08:57.617454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884983314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.617483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.954 [2024-07-14 21:08:57.617533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.617549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.954 [2024-07-14 21:08:57.617603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.617618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.954 [2024-07-14 21:08:57.617673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.617688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.954 #47 NEW cov: 12169 ft: 15458 corp: 30/1342b lim: 120 exec/s: 47 rss: 70Mb L: 105/116 MS: 1 InsertRepeatedBytes- 00:08:00.954 [2024-07-14 21:08:57.657250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.657277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.954 [2024-07-14 21:08:57.657316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:868082074056920076 len:3085 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.657333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.954 #48 NEW cov: 12169 ft: 15760 corp: 31/1404b lim: 120 exec/s: 48 rss: 70Mb L: 62/116 MS: 1 InsertRepeatedBytes- 00:08:00.954 [2024-07-14 21:08:57.697691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.697719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.954 [2024-07-14 21:08:57.697756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.697773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.954 [2024-07-14 21:08:57.697828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.954 [2024-07-14 21:08:57.697845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.954 #49 NEW cov: 12169 ft: 15770 corp: 32/1499b lim: 120 exec/s: 49 rss: 70Mb L: 95/116 MS: 1 InsertRepeatedBytes- 00:08:00.954 [2024-07-14 21:08:57.737741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.737768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.737812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.737827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.737885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.737900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.955 #50 NEW cov: 12169 ft: 15772 corp: 33/1594b lim: 120 exec/s: 50 rss: 70Mb L: 95/116 MS: 1 ShuffleBytes- 00:08:00.955 [2024-07-14 21:08:57.788093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.788121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.788170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16638267347922511590 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.788188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.788244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.788260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.788317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.788335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.955 #51 NEW cov: 12169 ft: 15786 corp: 34/1713b lim: 120 exec/s: 51 rss: 71Mb L: 119/119 MS: 1 CopyPart- 00:08:00.955 [2024-07-14 21:08:57.838244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.838271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.838326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16638267347922511590 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.838344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.838398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.838414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.955 [2024-07-14 21:08:57.838476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:34439 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.955 [2024-07-14 21:08:57.838492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.214 #52 NEW cov: 12169 ft: 15787 corp: 35/1832b lim: 120 exec/s: 52 rss: 71Mb L: 119/119 MS: 1 CrossOver- 00:08:01.214 [2024-07-14 21:08:57.888079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.888108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.214 [2024-07-14 21:08:57.888163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.888180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.214 #53 NEW cov: 12169 ft: 15814 corp: 36/1890b lim: 120 exec/s: 53 rss: 71Mb L: 58/119 MS: 1 InsertRepeatedBytes- 00:08:01.214 [2024-07-14 21:08:57.928622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239748884981266 
len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.928650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.214 [2024-07-14 21:08:57.928705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16638267347922511590 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.928721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.214 [2024-07-14 21:08:57.928776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.928792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.214 [2024-07-14 21:08:57.928847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.928864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.214 [2024-07-14 21:08:57.928918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:1362029441589415823 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.928936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:01.214 #54 NEW cov: 12169 ft: 15853 corp: 37/2010b lim: 120 exec/s: 54 rss: 71Mb L: 120/120 MS: 1 InsertByte- 00:08:01.214 [2024-07-14 21:08:57.968087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638239010150608402 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:57.968116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.214 #55 NEW cov: 12169 ft: 15894 corp: 38/2039b lim: 120 exec/s: 55 rss: 71Mb L: 29/120 MS: 1 ChangeByte- 00:08:01.214 [2024-07-14 21:08:58.018234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638238838351916562 len:59111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:58.018262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.214 #61 NEW cov: 12169 ft: 15957 corp: 39/2069b lim: 120 exec/s: 61 rss: 71Mb L: 30/120 MS: 1 ChangeBit- 00:08:01.214 [2024-07-14 21:08:58.068718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16638267344049858066 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:58.068746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.214 [2024-07-14 21:08:58.068796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:58.068813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:08:01.214 [2024-07-14 21:08:58.068866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.214 [2024-07-14 21:08:58.068883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.214 #62 NEW cov: 12169 ft: 15962 corp: 40/2158b lim: 120 exec/s: 31 rss: 71Mb L: 89/120 MS: 1 EraseBytes- 00:08:01.214 #62 DONE cov: 12169 ft: 15962 corp: 40/2158b lim: 120 exec/s: 31 rss: 71Mb 00:08:01.214 Done 62 runs in 2 second(s) 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.474 21:08:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:01.474 [2024-07-14 21:08:58.239501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:01.474 [2024-07-14 21:08:58.239571] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025108 ] 00:08:01.474 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.734 [2024-07-14 21:08:58.421024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.734 [2024-07-14 21:08:58.443086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.734 [2024-07-14 21:08:58.495352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.734 [2024-07-14 21:08:58.511678] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:01.734 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.734 INFO: Seed: 2970993515 00:08:01.734 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:01.734 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:01.734 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:01.734 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.734 #2 INITED exec/s: 0 rss: 63Mb 00:08:01.734 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.734 This may also happen if the target rejected all inputs we tried so far 00:08:01.734 [2024-07-14 21:08:58.559716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:01.734 [2024-07-14 21:08:58.559745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.993 NEW_FUNC[1/691]: 0x4b10e0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:01.993 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:01.993 #22 NEW cov: 11868 ft: 11866 corp: 2/26b lim: 100 exec/s: 0 rss: 70Mb L: 25/25 MS: 5 ChangeBit-CrossOver-CrossOver-CopyPart-InsertRepeatedBytes- 00:08:01.993 [2024-07-14 21:08:58.870494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:01.993 [2024-07-14 21:08:58.870534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.252 #23 NEW cov: 11998 ft: 12407 corp: 3/56b lim: 100 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:08:02.252 [2024-07-14 21:08:58.920517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.252 [2024-07-14 21:08:58.920544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.252 #24 NEW cov: 12004 ft: 12704 corp: 4/78b lim: 100 exec/s: 0 rss: 70Mb L: 22/30 MS: 1 EraseBytes- 00:08:02.252 [2024-07-14 21:08:58.960625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.252 [2024-07-14 21:08:58.960651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.252 #25 NEW cov: 12089 ft: 12901 corp: 5/100b lim: 100 exec/s: 0 rss: 70Mb L: 
22/30 MS: 1 CopyPart- 00:08:02.252 [2024-07-14 21:08:59.010773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.252 [2024-07-14 21:08:59.010798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.252 #26 NEW cov: 12089 ft: 13065 corp: 6/125b lim: 100 exec/s: 0 rss: 70Mb L: 25/30 MS: 1 CopyPart- 00:08:02.252 [2024-07-14 21:08:59.060883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.252 [2024-07-14 21:08:59.060908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.252 #32 NEW cov: 12089 ft: 13164 corp: 7/150b lim: 100 exec/s: 0 rss: 70Mb L: 25/30 MS: 1 ShuffleBytes- 00:08:02.252 [2024-07-14 21:08:59.111059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.252 [2024-07-14 21:08:59.111085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.252 #34 NEW cov: 12089 ft: 13220 corp: 8/189b lim: 100 exec/s: 0 rss: 70Mb L: 39/39 MS: 2 EraseBytes-CrossOver- 00:08:02.252 [2024-07-14 21:08:59.151174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.252 [2024-07-14 21:08:59.151201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.511 #35 NEW cov: 12089 ft: 13265 corp: 9/211b lim: 100 exec/s: 0 rss: 70Mb L: 22/39 MS: 1 ShuffleBytes- 00:08:02.511 [2024-07-14 21:08:59.191264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.511 [2024-07-14 21:08:59.191293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.511 #36 NEW cov: 12089 ft: 13310 corp: 10/250b lim: 100 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 ChangeBit- 00:08:02.511 [2024-07-14 21:08:59.241440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.511 [2024-07-14 21:08:59.241473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.511 #37 NEW cov: 12089 ft: 13356 corp: 11/275b lim: 100 exec/s: 0 rss: 70Mb L: 25/39 MS: 1 ShuffleBytes- 00:08:02.511 [2024-07-14 21:08:59.291532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.511 [2024-07-14 21:08:59.291558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.511 #38 NEW cov: 12089 ft: 13366 corp: 12/314b lim: 100 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 CMP- DE: "\001\003"- 00:08:02.511 [2024-07-14 21:08:59.331770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.511 [2024-07-14 21:08:59.331795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.511 [2024-07-14 21:08:59.331830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.511 [2024-07-14 21:08:59.331846] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.511 #39 NEW cov: 12089 ft: 13684 corp: 13/354b lim: 100 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertByte- 00:08:02.511 [2024-07-14 21:08:59.371798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.511 [2024-07-14 21:08:59.371825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.511 #40 NEW cov: 12089 ft: 13724 corp: 14/393b lim: 100 exec/s: 0 rss: 70Mb L: 39/40 MS: 1 ChangeByte- 00:08:02.770 [2024-07-14 21:08:59.421906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.770 [2024-07-14 21:08:59.421933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.770 #41 NEW cov: 12089 ft: 13854 corp: 15/418b lim: 100 exec/s: 0 rss: 70Mb L: 25/40 MS: 1 ChangeByte- 00:08:02.770 [2024-07-14 21:08:59.462110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.770 [2024-07-14 21:08:59.462135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.770 [2024-07-14 21:08:59.462171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:02.770 [2024-07-14 21:08:59.462186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.770 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:02.770 #42 NEW cov: 12112 ft: 13897 corp: 16/458b lim: 100 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:02.770 [2024-07-14 21:08:59.512168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.770 [2024-07-14 21:08:59.512197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.770 #43 NEW cov: 12112 ft: 13945 corp: 17/483b lim: 100 exec/s: 0 rss: 70Mb L: 25/40 MS: 1 ChangeBinInt- 00:08:02.770 [2024-07-14 21:08:59.552278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.770 [2024-07-14 21:08:59.552304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.770 #44 NEW cov: 12112 ft: 13948 corp: 18/516b lim: 100 exec/s: 44 rss: 70Mb L: 33/40 MS: 1 CMP- DE: "\20320\031_\202*\000"- 00:08:02.770 [2024-07-14 21:08:59.592423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.770 [2024-07-14 21:08:59.592454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.770 #45 NEW cov: 12112 ft: 13971 corp: 19/538b lim: 100 exec/s: 45 rss: 71Mb L: 22/40 MS: 1 ChangeByte- 00:08:02.770 [2024-07-14 21:08:59.642553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:02.770 [2024-07-14 21:08:59.642580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:03.029 #46 NEW cov: 12112 ft: 13988 corp: 20/568b lim: 100 exec/s: 46 rss: 71Mb L: 30/40 MS: 1 ChangeBit- 00:08:03.029 [2024-07-14 21:08:59.692804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.029 [2024-07-14 21:08:59.692829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.029 [2024-07-14 21:08:59.692866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.029 [2024-07-14 21:08:59.692880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.029 #47 NEW cov: 12112 ft: 13996 corp: 21/608b lim: 100 exec/s: 47 rss: 71Mb L: 40/40 MS: 1 InsertByte- 00:08:03.029 [2024-07-14 21:08:59.732805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.029 [2024-07-14 21:08:59.732831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.029 #48 NEW cov: 12112 ft: 14032 corp: 22/633b lim: 100 exec/s: 48 rss: 71Mb L: 25/40 MS: 1 ChangeByte- 00:08:03.029 [2024-07-14 21:08:59.782954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.029 [2024-07-14 21:08:59.782980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.029 #49 NEW cov: 12112 ft: 14042 corp: 23/662b lim: 100 exec/s: 49 rss: 71Mb L: 29/40 MS: 1 CMP- DE: "\377\377\000\031"- 00:08:03.029 [2024-07-14 21:08:59.833087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.029 [2024-07-14 21:08:59.833114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.029 #50 NEW cov: 12112 ft: 14106 corp: 24/701b lim: 100 exec/s: 50 rss: 71Mb L: 39/40 MS: 1 ChangeBit- 00:08:03.030 [2024-07-14 21:08:59.883234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.030 [2024-07-14 21:08:59.883260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.030 #51 NEW cov: 12112 ft: 14136 corp: 25/735b lim: 100 exec/s: 51 rss: 71Mb L: 34/40 MS: 1 PersAutoDict- DE: "\377\377\000\031"- 00:08:03.289 [2024-07-14 21:08:59.933378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.289 [2024-07-14 21:08:59.933406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.289 #52 NEW cov: 12112 ft: 14146 corp: 26/774b lim: 100 exec/s: 52 rss: 71Mb L: 39/40 MS: 1 ChangeBinInt- 00:08:03.289 [2024-07-14 21:08:59.983852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.289 [2024-07-14 21:08:59.983879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:08:59.983924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.289 [2024-07-14 21:08:59.983938] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:08:59.983989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.289 [2024-07-14 21:08:59.984006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:08:59.984058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:03.289 [2024-07-14 21:08:59.984072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.289 #53 NEW cov: 12112 ft: 14535 corp: 27/866b lim: 100 exec/s: 53 rss: 71Mb L: 92/92 MS: 1 InsertRepeatedBytes- 00:08:03.289 [2024-07-14 21:09:00.023775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.289 [2024-07-14 21:09:00.023805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:09:00.023858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.289 [2024-07-14 21:09:00.023874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.289 #54 NEW cov: 12112 ft: 14581 corp: 28/906b lim: 100 exec/s: 54 rss: 72Mb L: 40/92 MS: 1 CopyPart- 00:08:03.289 [2024-07-14 21:09:00.073986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.289 [2024-07-14 21:09:00.074012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:09:00.074045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.289 [2024-07-14 21:09:00.074058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:09:00.074107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:03.289 [2024-07-14 21:09:00.074121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.289 #55 NEW cov: 12112 ft: 14825 corp: 29/968b lim: 100 exec/s: 55 rss: 72Mb L: 62/92 MS: 1 InsertRepeatedBytes- 00:08:03.289 [2024-07-14 21:09:00.123943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.289 [2024-07-14 21:09:00.123970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.289 #56 NEW cov: 12112 ft: 14892 corp: 30/991b lim: 100 exec/s: 56 rss: 72Mb L: 23/92 MS: 1 InsertByte- 00:08:03.289 [2024-07-14 21:09:00.164125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.289 [2024-07-14 21:09:00.164151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.289 [2024-07-14 21:09:00.164197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:1 nsid:0 00:08:03.289 [2024-07-14 21:09:00.164211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.289 #57 NEW cov: 12112 ft: 14904 corp: 31/1034b lim: 100 exec/s: 57 rss: 72Mb L: 43/92 MS: 1 CopyPart- 00:08:03.548 [2024-07-14 21:09:00.204131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.548 [2024-07-14 21:09:00.204160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.548 #58 NEW cov: 12112 ft: 14946 corp: 32/1073b lim: 100 exec/s: 58 rss: 72Mb L: 39/92 MS: 1 ChangeBinInt- 00:08:03.548 [2024-07-14 21:09:00.254325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.548 [2024-07-14 21:09:00.254352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.548 #59 NEW cov: 12112 ft: 15003 corp: 33/1100b lim: 100 exec/s: 59 rss: 72Mb L: 27/92 MS: 1 PersAutoDict- DE: "\001\003"- 00:08:03.548 [2024-07-14 21:09:00.294412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.548 [2024-07-14 21:09:00.294439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.548 #60 NEW cov: 12112 ft: 15023 corp: 34/1137b lim: 100 exec/s: 60 rss: 72Mb L: 37/92 MS: 1 CopyPart- 00:08:03.548 [2024-07-14 21:09:00.344591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.548 [2024-07-14 21:09:00.344617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.548 #61 NEW cov: 12112 ft: 15028 corp: 35/1162b lim: 100 exec/s: 61 rss: 72Mb L: 25/92 MS: 1 ChangeBinInt- 00:08:03.548 [2024-07-14 21:09:00.384651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.548 [2024-07-14 21:09:00.384677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.548 #62 NEW cov: 12112 ft: 15036 corp: 36/1199b lim: 100 exec/s: 62 rss: 72Mb L: 37/92 MS: 1 ChangeBit- 00:08:03.548 [2024-07-14 21:09:00.434945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.548 [2024-07-14 21:09:00.434970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.548 [2024-07-14 21:09:00.435003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.548 [2024-07-14 21:09:00.435020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.807 #63 NEW cov: 12112 ft: 15079 corp: 37/1239b lim: 100 exec/s: 63 rss: 72Mb L: 40/92 MS: 1 InsertByte- 00:08:03.807 [2024-07-14 21:09:00.474935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.807 [2024-07-14 21:09:00.474963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.807 #64 NEW cov: 12112 ft: 
15089 corp: 38/1275b lim: 100 exec/s: 64 rss: 72Mb L: 36/92 MS: 1 EraseBytes- 00:08:03.808 [2024-07-14 21:09:00.515161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.808 [2024-07-14 21:09:00.515188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.808 [2024-07-14 21:09:00.515222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:03.808 [2024-07-14 21:09:00.515237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.808 #65 NEW cov: 12112 ft: 15108 corp: 39/1323b lim: 100 exec/s: 65 rss: 72Mb L: 48/92 MS: 1 InsertRepeatedBytes- 00:08:03.808 [2024-07-14 21:09:00.555135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:03.808 [2024-07-14 21:09:00.555162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.808 #66 NEW cov: 12112 ft: 15117 corp: 40/1353b lim: 100 exec/s: 33 rss: 72Mb L: 30/92 MS: 1 ChangeBinInt- 00:08:03.808 #66 DONE cov: 12112 ft: 15117 corp: 40/1353b lim: 100 exec/s: 33 rss: 72Mb 00:08:03.808 ###### Recommended dictionary. ###### 00:08:03.808 "\001\003" # Uses: 1 00:08:03.808 "\20320\031_\202*\000" # Uses: 0 00:08:03.808 "\377\377\000\031" # Uses: 1 00:08:03.808 ###### End of recommended dictionary. ###### 00:08:03.808 Done 66 runs in 2 second(s) 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.808 
21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.808 21:09:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:04.067 [2024-07-14 21:09:00.728321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:04.067 [2024-07-14 21:09:00.728394] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025578 ] 00:08:04.068 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.068 [2024-07-14 21:09:00.915745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.068 [2024-07-14 21:09:00.937684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.326 [2024-07-14 21:09:00.989863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.327 [2024-07-14 21:09:01.006181] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:04.327 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.327 INFO: Seed: 1172032448 00:08:04.327 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:04.327 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:04.327 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:04.327 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.327 #2 INITED exec/s: 0 rss: 62Mb 00:08:04.327 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:04.327 This may also happen if the target rejected all inputs we tried so far 00:08:04.327 [2024-07-14 21:09:01.051504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:04.327 [2024-07-14 21:09:01.051536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.327 [2024-07-14 21:09:01.051594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.327 [2024-07-14 21:09:01.051614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.327 [2024-07-14 21:09:01.051670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:04.327 [2024-07-14 21:09:01.051684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.585 NEW_FUNC[1/691]: 0x4b40a0 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:04.585 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.585 #6 NEW cov: 11846 ft: 11846 corp: 2/37b lim: 50 exec/s: 0 rss: 69Mb L: 36/36 MS: 4 CopyPart-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:08:04.585 [2024-07-14 21:09:01.372277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:04.585 [2024-07-14 21:09:01.372314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.585 [2024-07-14 21:09:01.372369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.585 [2024-07-14 21:09:01.372386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.585 [2024-07-14 21:09:01.372439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:04.585 [2024-07-14 21:09:01.372459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.585 #7 NEW cov: 11976 ft: 12266 corp: 3/73b lim: 50 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 ChangeByte- 00:08:04.585 [2024-07-14 21:09:01.422349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:04.586 [2024-07-14 21:09:01.422379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.586 [2024-07-14 21:09:01.422416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.586 [2024-07-14 21:09:01.422432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.586 [2024-07-14 21:09:01.422492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:04.586 [2024-07-14 21:09:01.422508] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.586 #17 NEW cov: 11982 ft: 12687 corp: 4/109b lim: 50 exec/s: 0 rss: 69Mb L: 36/36 MS: 5 ShuffleBytes-ChangeBit-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:08:04.586 [2024-07-14 21:09:01.462413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:04.586 [2024-07-14 21:09:01.462447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.586 [2024-07-14 21:09:01.462483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.586 [2024-07-14 21:09:01.462499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.586 [2024-07-14 21:09:01.462552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:36 len:1 00:08:04.586 [2024-07-14 21:09:01.462569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.586 #18 NEW cov: 12067 ft: 12881 corp: 5/145b lim: 50 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 ChangeBinInt- 00:08:04.844 [2024-07-14 21:09:01.502536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:04.844 [2024-07-14 21:09:01.502565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.844 [2024-07-14 21:09:01.502601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:138538465099776 len:1 00:08:04.844 [2024-07-14 21:09:01.502632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.844 [2024-07-14 21:09:01.502684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:04.844 [2024-07-14 21:09:01.502700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.844 #19 NEW cov: 12067 ft: 12955 corp: 6/182b lim: 50 exec/s: 0 rss: 69Mb L: 37/37 MS: 1 InsertByte- 00:08:04.844 [2024-07-14 21:09:01.552803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:04.844 [2024-07-14 21:09:01.552832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.844 [2024-07-14 21:09:01.552872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.844 [2024-07-14 21:09:01.552887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.552940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2113929216 len:1 00:08:04.845 [2024-07-14 21:09:01.552956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 
21:09:01.553009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.553024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.845 #20 NEW cov: 12067 ft: 13317 corp: 7/223b lim: 50 exec/s: 0 rss: 69Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:08:04.845 [2024-07-14 21:09:01.602932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:04.845 [2024-07-14 21:09:01.602961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.603001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:65536 00:08:04.845 [2024-07-14 21:09:01.603017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.603070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967040 len:1 00:08:04.845 [2024-07-14 21:09:01.603086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.603137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.603152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.845 #21 NEW cov: 12067 ft: 13395 corp: 8/265b lim: 50 exec/s: 0 rss: 70Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:08:04.845 [2024-07-14 21:09:01.652865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:04.845 [2024-07-14 21:09:01.652894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.652948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.652966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.845 #22 NEW cov: 12067 ft: 13714 corp: 9/289b lim: 50 exec/s: 0 rss: 70Mb L: 24/42 MS: 1 EraseBytes- 00:08:04.845 [2024-07-14 21:09:01.703200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:04.845 [2024-07-14 21:09:01.703232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.703266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.703282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.703333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.703351] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.703403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.703419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.845 #23 NEW cov: 12067 ft: 13739 corp: 10/333b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 CopyPart- 00:08:04.845 [2024-07-14 21:09:01.743475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:04.845 [2024-07-14 21:09:01.743504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.743552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:65536 00:08:04.845 [2024-07-14 21:09:01.743570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.743621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967040 len:1 00:08:04.845 [2024-07-14 21:09:01.743638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.743689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.743703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.845 [2024-07-14 21:09:01.743754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:08:04.845 [2024-07-14 21:09:01.743771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.104 #24 NEW cov: 12067 ft: 13867 corp: 11/383b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:05.104 [2024-07-14 21:09:01.793480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:05.104 [2024-07-14 21:09:01.793508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.793561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.793577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.793630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.793647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.793699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.793714] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.105 #25 NEW cov: 12067 ft: 13893 corp: 12/425b lim: 50 exec/s: 0 rss: 70Mb L: 42/50 MS: 1 CopyPart- 00:08:05.105 [2024-07-14 21:09:01.843548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:05.105 [2024-07-14 21:09:01.843577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.843612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.843629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.843681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.843697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.105 #26 NEW cov: 12067 ft: 13912 corp: 13/458b lim: 50 exec/s: 0 rss: 70Mb L: 33/50 MS: 1 EraseBytes- 00:08:05.105 [2024-07-14 21:09:01.883622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:05.105 [2024-07-14 21:09:01.883650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.883687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:65536 len:1 00:08:05.105 [2024-07-14 21:09:01.883704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.883757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.883773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.105 #27 NEW cov: 12067 ft: 13996 corp: 14/494b lim: 50 exec/s: 0 rss: 70Mb L: 36/50 MS: 1 ChangeBit- 00:08:05.105 [2024-07-14 21:09:01.923837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1241645056 len:1 00:08:05.105 [2024-07-14 21:09:01.923865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.923912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.923929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.923982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.923998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.924052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.924069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.105 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:05.105 #28 NEW cov: 12090 ft: 14122 corp: 15/539b lim: 50 exec/s: 0 rss: 70Mb L: 45/50 MS: 1 InsertByte- 00:08:05.105 [2024-07-14 21:09:01.973886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:05.105 [2024-07-14 21:09:01.973915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.973950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.105 [2024-07-14 21:09:01.973966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.105 [2024-07-14 21:09:01.974025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:603979776 len:1 00:08:05.105 [2024-07-14 21:09:01.974041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.364 #29 NEW cov: 12090 ft: 14167 corp: 16/572b lim: 50 exec/s: 0 rss: 70Mb L: 33/50 MS: 1 EraseBytes- 00:08:05.364 [2024-07-14 21:09:02.023952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:05.364 [2024-07-14 21:09:02.023980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.024032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16140901064495857664 len:1 00:08:05.364 [2024-07-14 21:09:02.024048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.364 #30 NEW cov: 12090 ft: 14168 corp: 17/596b lim: 50 exec/s: 30 rss: 70Mb L: 24/50 MS: 1 ChangeByte- 00:08:05.364 [2024-07-14 21:09:02.064176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:05.364 [2024-07-14 21:09:02.064204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.064241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.364 [2024-07-14 21:09:02.064260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.064314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1761607680 len:1 00:08:05.364 [2024-07-14 21:09:02.064329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.364 #31 NEW cov: 12090 ft: 14174 corp: 18/629b lim: 50 exec/s: 31 rss: 70Mb L: 33/50 MS: 1 ChangeByte- 00:08:05.364 [2024-07-14 21:09:02.114387] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:05.364 [2024-07-14 21:09:02.114414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.114465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.364 [2024-07-14 21:09:02.114481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.114535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:603979776 len:1 00:08:05.364 [2024-07-14 21:09:02.114551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.114604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.364 [2024-07-14 21:09:02.114620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.364 #32 NEW cov: 12090 ft: 14214 corp: 19/677b lim: 50 exec/s: 32 rss: 70Mb L: 48/50 MS: 1 CopyPart- 00:08:05.364 [2024-07-14 21:09:02.164565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:29802945839104 len:6940 00:08:05.364 [2024-07-14 21:09:02.164593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.164634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1953184550663953179 len:1 00:08:05.364 [2024-07-14 21:09:02.164653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.164707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.364 [2024-07-14 21:09:02.164722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.164778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2359296 len:1 00:08:05.364 [2024-07-14 21:09:02.164795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.364 #33 NEW cov: 12090 ft: 14235 corp: 20/721b lim: 50 exec/s: 33 rss: 70Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:08:05.364 [2024-07-14 21:09:02.204457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1025 00:08:05.364 [2024-07-14 21:09:02.204484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.364 [2024-07-14 21:09:02.204539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.364 [2024-07-14 21:09:02.204557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:08:05.364 #34 NEW cov: 12090 ft: 14265 corp: 21/745b lim: 50 exec/s: 34 rss: 70Mb L: 24/50 MS: 1 ChangeBit- 00:08:05.364 [2024-07-14 21:09:02.244770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:29802945839104 len:6940 00:08:05.365 [2024-07-14 21:09:02.244798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.365 [2024-07-14 21:09:02.244839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1953184550663953179 len:1 00:08:05.365 [2024-07-14 21:09:02.244855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.365 [2024-07-14 21:09:02.244909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.365 [2024-07-14 21:09:02.244925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.365 [2024-07-14 21:09:02.244977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2359296 len:1 00:08:05.365 [2024-07-14 21:09:02.244994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.624 #35 NEW cov: 12090 ft: 14293 corp: 22/789b lim: 50 exec/s: 35 rss: 70Mb L: 44/50 MS: 1 ChangeBit- 00:08:05.624 [2024-07-14 21:09:02.294775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:05.624 [2024-07-14 21:09:02.294803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.294843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:9473 00:08:05.624 [2024-07-14 21:09:02.294860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.294914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.294930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.624 #36 NEW cov: 12090 ft: 14322 corp: 23/826b lim: 50 exec/s: 36 rss: 70Mb L: 37/50 MS: 1 InsertByte- 00:08:05.624 [2024-07-14 21:09:02.335099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:05.624 [2024-07-14 21:09:02.335130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.335172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:65536 00:08:05.624 [2024-07-14 21:09:02.335189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.335244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967040 len:1 00:08:05.624 [2024-07-14 
21:09:02.335259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.335311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.335327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.335381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.335396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.624 #37 NEW cov: 12090 ft: 14332 corp: 24/876b lim: 50 exec/s: 37 rss: 70Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:08:05.624 [2024-07-14 21:09:02.375117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:05.624 [2024-07-14 21:09:02.375144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.375184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.375201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.375256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:989855744 len:1 00:08:05.624 [2024-07-14 21:09:02.375271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.375326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.375342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.624 #38 NEW cov: 12090 ft: 14355 corp: 25/921b lim: 50 exec/s: 38 rss: 70Mb L: 45/50 MS: 1 InsertByte- 00:08:05.624 [2024-07-14 21:09:02.415002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15996785876587774206 len:1025 00:08:05.624 [2024-07-14 21:09:02.415030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.415071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.415088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.624 #39 NEW cov: 12090 ft: 14380 corp: 26/945b lim: 50 exec/s: 39 rss: 70Mb L: 24/50 MS: 1 ChangeBinInt- 00:08:05.624 [2024-07-14 21:09:02.465401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14468034564414505214 len:51401 00:08:05.624 [2024-07-14 21:09:02.465430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.465471] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:08:05.624 [2024-07-14 21:09:02.465492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.465544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:56833 00:08:05.624 [2024-07-14 21:09:02.465560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.465613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1024 len:1 00:08:05.624 [2024-07-14 21:09:02.465631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.624 #40 NEW cov: 12090 ft: 14389 corp: 27/993b lim: 50 exec/s: 40 rss: 70Mb L: 48/50 MS: 1 InsertRepeatedBytes- 00:08:05.624 [2024-07-14 21:09:02.515457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:05.624 [2024-07-14 21:09:02.515486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.515521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:553648128 len:1 00:08:05.624 [2024-07-14 21:09:02.515535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.624 [2024-07-14 21:09:02.515590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.624 [2024-07-14 21:09:02.515607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.883 #41 NEW cov: 12090 ft: 14402 corp: 28/1030b lim: 50 exec/s: 41 rss: 70Mb L: 37/50 MS: 1 CrossOver- 00:08:05.883 [2024-07-14 21:09:02.555566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:05.883 [2024-07-14 21:09:02.555594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.883 [2024-07-14 21:09:02.555630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3573547008 len:1 00:08:05.883 [2024-07-14 21:09:02.555646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.883 [2024-07-14 21:09:02.555701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2359296 len:1 00:08:05.883 [2024-07-14 21:09:02.555715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.883 #42 NEW cov: 12090 ft: 14412 corp: 29/1064b lim: 50 exec/s: 42 rss: 70Mb L: 34/50 MS: 1 InsertByte- 00:08:05.883 [2024-07-14 21:09:02.595786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:1 00:08:05.883 [2024-07-14 
21:09:02.595815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.883 [2024-07-14 21:09:02.595851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:127 00:08:05.883 [2024-07-14 21:09:02.595868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.883 [2024-07-14 21:09:02.595919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:05.883 [2024-07-14 21:09:02.595934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.883 [2024-07-14 21:09:02.595985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.883 [2024-07-14 21:09:02.596000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.883 #43 NEW cov: 12090 ft: 14453 corp: 30/1104b lim: 50 exec/s: 43 rss: 70Mb L: 40/50 MS: 1 EraseBytes- 00:08:05.883 [2024-07-14 21:09:02.635528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772170 len:1 00:08:05.883 [2024-07-14 21:09:02.635557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.883 #44 NEW cov: 12090 ft: 14813 corp: 31/1117b lim: 50 exec/s: 44 rss: 70Mb L: 13/50 MS: 1 CrossOver- 00:08:05.884 [2024-07-14 21:09:02.676023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:1 00:08:05.884 [2024-07-14 21:09:02.676050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.676093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:65536 00:08:05.884 [2024-07-14 21:09:02.676110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.676161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4280942336 len:1 00:08:05.884 [2024-07-14 21:09:02.676177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.676229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.884 [2024-07-14 21:09:02.676245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.884 #45 NEW cov: 12090 ft: 14822 corp: 32/1159b lim: 50 exec/s: 45 rss: 70Mb L: 42/50 MS: 1 ChangeByte- 00:08:05.884 [2024-07-14 21:09:02.716033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772170 len:13622 00:08:05.884 [2024-07-14 21:09:02.716062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.716097] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3834029160418063669 len:13622 00:08:05.884 [2024-07-14 21:09:02.716115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.716171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3834029160418063669 len:1 00:08:05.884 [2024-07-14 21:09:02.716187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.884 #46 NEW cov: 12090 ft: 14847 corp: 33/1192b lim: 50 exec/s: 46 rss: 70Mb L: 33/50 MS: 1 InsertRepeatedBytes- 00:08:05.884 [2024-07-14 21:09:02.766320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:05.884 [2024-07-14 21:09:02.766348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.766394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:131072 len:1 00:08:05.884 [2024-07-14 21:09:02.766411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.766464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:603979776 len:1 00:08:05.884 [2024-07-14 21:09:02.766480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.884 [2024-07-14 21:09:02.766531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:05.884 [2024-07-14 21:09:02.766550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.142 #47 NEW cov: 12090 ft: 14867 corp: 34/1240b lim: 50 exec/s: 47 rss: 70Mb L: 48/50 MS: 1 ChangeBit- 00:08:06.142 [2024-07-14 21:09:02.816179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2377900603419394048 len:65 00:08:06.142 [2024-07-14 21:09:02.816209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.816249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16140901064495857664 len:1 00:08:06.142 [2024-07-14 21:09:02.816266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 #48 NEW cov: 12090 ft: 14922 corp: 35/1264b lim: 50 exec/s: 48 rss: 71Mb L: 24/50 MS: 1 ChangeBit- 00:08:06.142 [2024-07-14 21:09:02.866541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:06.142 [2024-07-14 21:09:02.866570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.866614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:131072 len:1 00:08:06.142 [2024-07-14 21:09:02.866631] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.866683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:620756992 len:1 00:08:06.142 [2024-07-14 21:09:02.866699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.866752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:06.142 [2024-07-14 21:09:02.866768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.142 #49 NEW cov: 12090 ft: 14933 corp: 36/1312b lim: 50 exec/s: 49 rss: 71Mb L: 48/50 MS: 1 ChangeBit- 00:08:06.142 [2024-07-14 21:09:02.916604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772170 len:13622 00:08:06.142 [2024-07-14 21:09:02.916632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.916668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3834029160418063669 len:13622 00:08:06.142 [2024-07-14 21:09:02.916683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.916735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3834029160418063669 len:1 00:08:06.142 [2024-07-14 21:09:02.916752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.142 #50 NEW cov: 12090 ft: 14940 corp: 37/1345b lim: 50 exec/s: 50 rss: 71Mb L: 33/50 MS: 1 ShuffleBytes- 00:08:06.142 [2024-07-14 21:09:02.966687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:08:06.142 [2024-07-14 21:09:02.966716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.966752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:06.142 [2024-07-14 21:09:02.966768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:02.966822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:603979776 len:1 00:08:06.142 [2024-07-14 21:09:02.966841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.142 #51 NEW cov: 12090 ft: 14942 corp: 38/1378b lim: 50 exec/s: 51 rss: 71Mb L: 33/50 MS: 1 CrossOver- 00:08:06.142 [2024-07-14 21:09:03.006922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:65536 00:08:06.142 [2024-07-14 21:09:03.006951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:03.007000] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294836224 len:1 00:08:06.142 [2024-07-14 21:09:03.007017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:03.007073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:06.142 [2024-07-14 21:09:03.007092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.142 [2024-07-14 21:09:03.007144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:06.142 [2024-07-14 21:09:03.007159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.402 [2024-07-14 21:09:03.046844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:33554432 len:65536 00:08:06.402 [2024-07-14 21:09:03.046872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.402 [2024-07-14 21:09:03.046926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294836224 len:1 00:08:06.402 [2024-07-14 21:09:03.046943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.402 #53 NEW cov: 12090 ft: 15018 corp: 39/1400b lim: 50 exec/s: 26 rss: 71Mb L: 22/50 MS: 2 ChangeBinInt-EraseBytes- 00:08:06.402 #53 DONE cov: 12090 ft: 15018 corp: 39/1400b lim: 50 exec/s: 26 rss: 71Mb 00:08:06.402 Done 53 runs in 2 second(s) 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.402 21:09:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:06.402 [2024-07-14 21:09:03.204095] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:06.402 [2024-07-14 21:09:03.204149] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026187 ] 00:08:06.402 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.660 [2024-07-14 21:09:03.379342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.660 [2024-07-14 21:09:03.401123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.660 [2024-07-14 21:09:03.453426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.660 [2024-07-14 21:09:03.469746] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:06.660 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.660 INFO: Seed: 3637027165 00:08:06.660 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:06.660 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:06.660 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:06.660 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.660 #2 INITED exec/s: 0 rss: 63Mb 00:08:06.660 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:06.660 This may also happen if the target rejected all inputs we tried so far 00:08:06.660 [2024-07-14 21:09:03.514449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:06.660 [2024-07-14 21:09:03.514486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.660 [2024-07-14 21:09:03.514522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:06.660 [2024-07-14 21:09:03.514541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.660 [2024-07-14 21:09:03.514571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:06.660 [2024-07-14 21:09:03.514587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.177 NEW_FUNC[1/693]: 0x4b5c60 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:07.177 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.177 #4 NEW cov: 11900 ft: 11905 corp: 2/57b lim: 90 exec/s: 0 rss: 69Mb L: 56/56 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:07.177 [2024-07-14 21:09:03.855215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.177 [2024-07-14 21:09:03.855259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.177 [2024-07-14 21:09:03.855295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.178 [2024-07-14 21:09:03.855312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.178 #11 NEW cov: 12034 ft: 12674 corp: 3/95b lim: 90 exec/s: 0 rss: 69Mb L: 38/56 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:07.178 [2024-07-14 21:09:03.915328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.178 [2024-07-14 21:09:03.915364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.178 [2024-07-14 21:09:03.915398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.178 [2024-07-14 21:09:03.915416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.178 [2024-07-14 21:09:03.915453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.178 [2024-07-14 21:09:03.915469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.178 #12 NEW cov: 12040 ft: 13040 corp: 4/151b lim: 90 exec/s: 0 rss: 69Mb L: 56/56 MS: 1 CMP- DE: "\000\003"- 00:08:07.178 [2024-07-14 21:09:03.995429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.178 [2024-07-14 21:09:03.995469] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.178 [2024-07-14 21:09:03.995504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.178 [2024-07-14 21:09:03.995522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.178 #13 NEW cov: 12125 ft: 13349 corp: 5/189b lim: 90 exec/s: 0 rss: 69Mb L: 38/56 MS: 1 ChangeBinInt- 00:08:07.178 [2024-07-14 21:09:04.075705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.178 [2024-07-14 21:09:04.075738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.178 [2024-07-14 21:09:04.075773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.178 [2024-07-14 21:09:04.075792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.437 #14 NEW cov: 12125 ft: 13419 corp: 6/227b lim: 90 exec/s: 0 rss: 70Mb L: 38/56 MS: 1 ShuffleBytes- 00:08:07.437 [2024-07-14 21:09:04.155820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.437 [2024-07-14 21:09:04.155853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.437 #15 NEW cov: 12125 ft: 14337 corp: 7/258b lim: 90 exec/s: 0 rss: 70Mb L: 31/56 MS: 1 EraseBytes- 00:08:07.437 [2024-07-14 21:09:04.216062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.437 [2024-07-14 21:09:04.216095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.437 [2024-07-14 21:09:04.216129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.437 [2024-07-14 21:09:04.216148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.437 #16 NEW cov: 12125 ft: 14482 corp: 8/296b lim: 90 exec/s: 0 rss: 70Mb L: 38/56 MS: 1 ChangeBinInt- 00:08:07.437 [2024-07-14 21:09:04.296249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.437 [2024-07-14 21:09:04.296284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.437 [2024-07-14 21:09:04.296319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.437 [2024-07-14 21:09:04.296337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.437 #17 NEW cov: 12125 ft: 14525 corp: 9/334b lim: 90 exec/s: 0 rss: 70Mb L: 38/56 MS: 1 ChangeByte- 00:08:07.696 [2024-07-14 21:09:04.346430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.696 [2024-07-14 21:09:04.346472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.696 [2024-07-14 21:09:04.346508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.696 [2024-07-14 21:09:04.346527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.697 [2024-07-14 21:09:04.346559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.697 [2024-07-14 21:09:04.346577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.697 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:07.697 #18 NEW cov: 12142 ft: 14560 corp: 10/390b lim: 90 exec/s: 0 rss: 70Mb L: 56/56 MS: 1 ShuffleBytes- 00:08:07.697 [2024-07-14 21:09:04.426590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.697 [2024-07-14 21:09:04.426632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.697 [2024-07-14 21:09:04.426665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.697 [2024-07-14 21:09:04.426682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.697 #24 NEW cov: 12142 ft: 14621 corp: 11/428b lim: 90 exec/s: 0 rss: 70Mb L: 38/56 MS: 1 ShuffleBytes- 00:08:07.697 [2024-07-14 21:09:04.506825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.697 [2024-07-14 21:09:04.506856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.697 [2024-07-14 21:09:04.506889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.697 [2024-07-14 21:09:04.506907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.697 [2024-07-14 21:09:04.506935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.697 [2024-07-14 21:09:04.506951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.697 #25 NEW cov: 12142 ft: 14659 corp: 12/484b lim: 90 exec/s: 25 rss: 70Mb L: 56/56 MS: 1 ChangeBinInt- 00:08:07.697 [2024-07-14 21:09:04.556874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.697 [2024-07-14 21:09:04.556906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.956 #26 NEW cov: 12142 ft: 14683 corp: 13/515b lim: 90 exec/s: 26 rss: 70Mb L: 31/56 MS: 1 EraseBytes- 00:08:07.956 [2024-07-14 21:09:04.617063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.956 [2024-07-14 21:09:04.617094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.956 [2024-07-14 21:09:04.617128] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.956 [2024-07-14 21:09:04.617144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.956 #27 NEW cov: 12142 ft: 14691 corp: 14/553b lim: 90 exec/s: 27 rss: 70Mb L: 38/56 MS: 1 PersAutoDict- DE: "\000\003"- 00:08:07.956 [2024-07-14 21:09:04.667129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.956 [2024-07-14 21:09:04.667159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.956 #28 NEW cov: 12142 ft: 14708 corp: 15/584b lim: 90 exec/s: 28 rss: 70Mb L: 31/56 MS: 1 CopyPart- 00:08:07.956 [2024-07-14 21:09:04.747451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.956 [2024-07-14 21:09:04.747481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.956 [2024-07-14 21:09:04.747513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.956 [2024-07-14 21:09:04.747530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.956 [2024-07-14 21:09:04.747558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.956 [2024-07-14 21:09:04.747573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.956 #29 NEW cov: 12142 ft: 14727 corp: 16/644b lim: 90 exec/s: 29 rss: 70Mb L: 60/60 MS: 1 CopyPart- 00:08:07.956 [2024-07-14 21:09:04.797626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.956 [2024-07-14 21:09:04.797655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.956 [2024-07-14 21:09:04.797687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:07.956 [2024-07-14 21:09:04.797703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.956 [2024-07-14 21:09:04.797732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:07.956 [2024-07-14 21:09:04.797748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.956 #30 NEW cov: 12142 ft: 14751 corp: 17/700b lim: 90 exec/s: 30 rss: 70Mb L: 56/60 MS: 1 ChangeBit- 00:08:07.956 [2024-07-14 21:09:04.847579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:07.956 [2024-07-14 21:09:04.847610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.215 #31 NEW cov: 12142 ft: 14799 corp: 18/731b lim: 90 exec/s: 31 rss: 70Mb L: 31/60 MS: 1 CopyPart- 00:08:08.215 [2024-07-14 21:09:04.927918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:08:08.215 [2024-07-14 21:09:04.927948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.215 [2024-07-14 21:09:04.927980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.215 [2024-07-14 21:09:04.927998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.215 [2024-07-14 21:09:04.928027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.215 [2024-07-14 21:09:04.928042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.215 #32 NEW cov: 12142 ft: 14814 corp: 19/787b lim: 90 exec/s: 32 rss: 70Mb L: 56/60 MS: 1 ChangeBit- 00:08:08.215 [2024-07-14 21:09:04.978050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.215 [2024-07-14 21:09:04.978080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.215 [2024-07-14 21:09:04.978112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.215 [2024-07-14 21:09:04.978129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.215 [2024-07-14 21:09:04.978162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.215 [2024-07-14 21:09:04.978179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.215 #33 NEW cov: 12142 ft: 14819 corp: 20/843b lim: 90 exec/s: 33 rss: 70Mb L: 56/60 MS: 1 ChangeBit- 00:08:08.215 [2024-07-14 21:09:05.058180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.215 [2024-07-14 21:09:05.058215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.475 #34 NEW cov: 12142 ft: 14830 corp: 21/874b lim: 90 exec/s: 34 rss: 70Mb L: 31/60 MS: 1 ChangeASCIIInt- 00:08:08.475 [2024-07-14 21:09:05.138424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.475 [2024-07-14 21:09:05.138460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.475 [2024-07-14 21:09:05.138494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.475 [2024-07-14 21:09:05.138511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.475 #35 NEW cov: 12142 ft: 14842 corp: 22/917b lim: 90 exec/s: 35 rss: 70Mb L: 43/60 MS: 1 EraseBytes- 00:08:08.475 [2024-07-14 21:09:05.218684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.475 [2024-07-14 21:09:05.218713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.475 [2024-07-14 
21:09:05.218744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.475 [2024-07-14 21:09:05.218761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.475 [2024-07-14 21:09:05.218790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.475 [2024-07-14 21:09:05.218805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.475 #36 NEW cov: 12142 ft: 14875 corp: 23/973b lim: 90 exec/s: 36 rss: 70Mb L: 56/60 MS: 1 CrossOver- 00:08:08.475 [2024-07-14 21:09:05.298866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.475 [2024-07-14 21:09:05.298896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.475 [2024-07-14 21:09:05.298930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.475 [2024-07-14 21:09:05.298948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.475 #37 NEW cov: 12142 ft: 14902 corp: 24/1014b lim: 90 exec/s: 37 rss: 70Mb L: 41/60 MS: 1 InsertRepeatedBytes- 00:08:08.475 [2024-07-14 21:09:05.348991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.475 [2024-07-14 21:09:05.349027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.475 [2024-07-14 21:09:05.349059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.475 [2024-07-14 21:09:05.349076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.475 [2024-07-14 21:09:05.349104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:08.475 [2024-07-14 21:09:05.349119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.735 #38 NEW cov: 12142 ft: 14915 corp: 25/1078b lim: 90 exec/s: 38 rss: 70Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:08:08.735 [2024-07-14 21:09:05.399068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.735 [2024-07-14 21:09:05.399098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.735 [2024-07-14 21:09:05.399131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.735 [2024-07-14 21:09:05.399148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.735 #39 NEW cov: 12149 ft: 14945 corp: 26/1119b lim: 90 exec/s: 39 rss: 70Mb L: 41/64 MS: 1 ChangeBit- 00:08:08.735 [2024-07-14 21:09:05.479303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.735 [2024-07-14 21:09:05.479333] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.735 [2024-07-14 21:09:05.479366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:08.735 [2024-07-14 21:09:05.479383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.735 [2024-07-14 21:09:05.529338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:08.735 [2024-07-14 21:09:05.529368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.735 #41 NEW cov: 12149 ft: 14979 corp: 27/1137b lim: 90 exec/s: 20 rss: 71Mb L: 18/64 MS: 2 CopyPart-CrossOver- 00:08:08.735 #41 DONE cov: 12149 ft: 14979 corp: 27/1137b lim: 90 exec/s: 20 rss: 71Mb 00:08:08.735 ###### Recommended dictionary. ###### 00:08:08.735 "\000\003" # Uses: 3 00:08:08.735 ###### End of recommended dictionary. ###### 00:08:08.735 Done 41 runs in 2 second(s) 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:08.995 21:09:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:08.995 [2024-07-14 21:09:05.712955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:08.995 [2024-07-14 21:09:05.713023] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026890 ] 00:08:08.995 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.255 [2024-07-14 21:09:05.899684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.255 [2024-07-14 21:09:05.922794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.255 [2024-07-14 21:09:05.975356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.255 [2024-07-14 21:09:05.991711] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:09.255 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.255 INFO: Seed: 1862074356 00:08:09.255 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:09.255 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:09.255 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:09.255 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.255 #2 INITED exec/s: 0 rss: 63Mb 00:08:09.255 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.255 This may also happen if the target rejected all inputs we tried so far 00:08:09.255 [2024-07-14 21:09:06.041130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.255 [2024-07-14 21:09:06.041162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.255 [2024-07-14 21:09:06.041230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.255 [2024-07-14 21:09:06.041251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.255 [2024-07-14 21:09:06.041317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.255 [2024-07-14 21:09:06.041341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.255 [2024-07-14 21:09:06.041406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.255 [2024-07-14 21:09:06.041426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.514 NEW_FUNC[1/693]: 0x4b8e80 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:09.514 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.514 #3 NEW cov: 11879 ft: 11877 corp: 2/48b lim: 50 exec/s: 0 rss: 69Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:08:09.514 [2024-07-14 21:09:06.371889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.514 [2024-07-14 21:09:06.371926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.514 [2024-07-14 21:09:06.371995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.514 [2024-07-14 21:09:06.372016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.514 [2024-07-14 21:09:06.372097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.514 [2024-07-14 21:09:06.372118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.514 #4 NEW cov: 12009 ft: 12813 corp: 3/86b lim: 50 exec/s: 0 rss: 69Mb L: 38/47 MS: 1 EraseBytes- 00:08:09.774 [2024-07-14 21:09:06.422077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.774 [2024-07-14 21:09:06.422108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.422174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.774 [2024-07-14 21:09:06.422196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.422261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.774 [2024-07-14 21:09:06.422284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.422347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.774 [2024-07-14 21:09:06.422366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.774 #5 NEW cov: 12015 ft: 13054 corp: 4/133b lim: 50 exec/s: 0 rss: 69Mb L: 47/47 MS: 1 ShuffleBytes- 00:08:09.774 [2024-07-14 21:09:06.462225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.774 [2024-07-14 21:09:06.462253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.462317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.774 [2024-07-14 21:09:06.462339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.462404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.774 [2024-07-14 21:09:06.462422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.462494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.774 [2024-07-14 21:09:06.462514] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.774 #6 NEW cov: 12100 ft: 13332 corp: 5/180b lim: 50 exec/s: 0 rss: 69Mb L: 47/47 MS: 1 ChangeBit- 00:08:09.774 [2024-07-14 21:09:06.502337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.774 [2024-07-14 21:09:06.502366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.502433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.774 [2024-07-14 21:09:06.502459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.502524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.774 [2024-07-14 21:09:06.502543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.502607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.774 [2024-07-14 21:09:06.502626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.774 #7 NEW cov: 12100 ft: 13389 corp: 6/227b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 ChangeByte- 00:08:09.774 [2024-07-14 21:09:06.542418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.774 [2024-07-14 21:09:06.542450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.542519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.774 [2024-07-14 21:09:06.542541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.542607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.774 [2024-07-14 21:09:06.542628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.542693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.774 [2024-07-14 21:09:06.542712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.774 #8 NEW cov: 12100 ft: 13469 corp: 7/274b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 ChangeBinInt- 00:08:09.774 [2024-07-14 21:09:06.592572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.774 [2024-07-14 21:09:06.592601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.592664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.774 [2024-07-14 21:09:06.592684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.592750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.774 [2024-07-14 21:09:06.592770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.592835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.774 [2024-07-14 21:09:06.592854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.774 #9 NEW cov: 12100 ft: 13506 corp: 8/322b lim: 50 exec/s: 0 rss: 70Mb L: 48/48 MS: 1 InsertByte- 00:08:09.774 [2024-07-14 21:09:06.632683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:09.774 [2024-07-14 21:09:06.632710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.632773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:09.774 [2024-07-14 21:09:06.632810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.632877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:09.774 [2024-07-14 21:09:06.632900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.774 [2024-07-14 21:09:06.632965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:09.774 [2024-07-14 21:09:06.632984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.774 #10 NEW cov: 12100 ft: 13561 corp: 9/370b lim: 50 exec/s: 0 rss: 70Mb L: 48/48 MS: 1 ShuffleBytes- 00:08:10.034 [2024-07-14 21:09:06.682846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.034 [2024-07-14 21:09:06.682874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.682933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.034 [2024-07-14 21:09:06.682957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.683023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.034 [2024-07-14 21:09:06.683047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.683114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.034 [2024-07-14 21:09:06.683133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.034 #11 NEW cov: 12100 ft: 13589 
corp: 10/418b lim: 50 exec/s: 0 rss: 70Mb L: 48/48 MS: 1 InsertByte- 00:08:10.034 [2024-07-14 21:09:06.732784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.034 [2024-07-14 21:09:06.732811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.732875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.034 [2024-07-14 21:09:06.732895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.732962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.034 [2024-07-14 21:09:06.732983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.034 #12 NEW cov: 12100 ft: 13655 corp: 11/456b lim: 50 exec/s: 0 rss: 70Mb L: 38/48 MS: 1 ShuffleBytes- 00:08:10.034 [2024-07-14 21:09:06.782922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.034 [2024-07-14 21:09:06.782950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.783010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.034 [2024-07-14 21:09:06.783031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.783100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.034 [2024-07-14 21:09:06.783120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.034 #13 NEW cov: 12100 ft: 13698 corp: 12/493b lim: 50 exec/s: 0 rss: 70Mb L: 37/48 MS: 1 EraseBytes- 00:08:10.034 [2024-07-14 21:09:06.833093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.034 [2024-07-14 21:09:06.833122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.833187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.034 [2024-07-14 21:09:06.833208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.833274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.034 [2024-07-14 21:09:06.833293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.034 #14 NEW cov: 12100 ft: 13725 corp: 13/531b lim: 50 exec/s: 0 rss: 70Mb L: 38/48 MS: 1 ShuffleBytes- 00:08:10.034 [2024-07-14 21:09:06.883253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.034 [2024-07-14 21:09:06.883280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.883353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.034 [2024-07-14 21:09:06.883375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.034 [2024-07-14 21:09:06.883449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.035 [2024-07-14 21:09:06.883472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.035 #15 NEW cov: 12100 ft: 13756 corp: 14/569b lim: 50 exec/s: 0 rss: 70Mb L: 38/48 MS: 1 ChangeBinInt- 00:08:10.035 [2024-07-14 21:09:06.923529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.035 [2024-07-14 21:09:06.923556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.035 [2024-07-14 21:09:06.923617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.035 [2024-07-14 21:09:06.923637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.035 [2024-07-14 21:09:06.923703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.035 [2024-07-14 21:09:06.923724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.035 [2024-07-14 21:09:06.923789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.035 [2024-07-14 21:09:06.923808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.306 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:10.306 #16 NEW cov: 12123 ft: 13792 corp: 15/617b lim: 50 exec/s: 0 rss: 70Mb L: 48/48 MS: 1 ChangeBit- 00:08:10.306 [2024-07-14 21:09:06.963445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.306 [2024-07-14 21:09:06.963474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.306 [2024-07-14 21:09:06.963537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.307 [2024-07-14 21:09:06.963558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:06.963624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.307 [2024-07-14 21:09:06.963644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.307 #17 NEW cov: 12123 ft: 13807 corp: 16/655b lim: 50 exec/s: 0 rss: 70Mb L: 38/48 MS: 1 ShuffleBytes- 00:08:10.307 [2024-07-14 21:09:07.013901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 
nsid:0 00:08:10.307 [2024-07-14 21:09:07.013929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.013987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.307 [2024-07-14 21:09:07.014008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.014077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.307 [2024-07-14 21:09:07.014095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.014160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.307 [2024-07-14 21:09:07.014182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.014248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:10.307 [2024-07-14 21:09:07.014267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:10.307 #18 NEW cov: 12123 ft: 13867 corp: 17/705b lim: 50 exec/s: 18 rss: 70Mb L: 50/50 MS: 1 CrossOver- 00:08:10.307 [2024-07-14 21:09:07.063435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.307 [2024-07-14 21:09:07.063467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.307 #20 NEW cov: 12123 ft: 14705 corp: 18/721b lim: 50 exec/s: 20 rss: 70Mb L: 16/50 MS: 2 InsertByte-CrossOver- 00:08:10.307 [2024-07-14 21:09:07.103695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.307 [2024-07-14 21:09:07.103724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.103792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.307 [2024-07-14 21:09:07.103813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.307 #21 NEW cov: 12123 ft: 14977 corp: 19/746b lim: 50 exec/s: 21 rss: 70Mb L: 25/50 MS: 1 EraseBytes- 00:08:10.307 [2024-07-14 21:09:07.154111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.307 [2024-07-14 21:09:07.154140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.154202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.307 [2024-07-14 21:09:07.154222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.154287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 
nsid:0 00:08:10.307 [2024-07-14 21:09:07.154310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.154375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.307 [2024-07-14 21:09:07.154393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.307 #22 NEW cov: 12123 ft: 14981 corp: 20/795b lim: 50 exec/s: 22 rss: 70Mb L: 49/50 MS: 1 CopyPart- 00:08:10.307 [2024-07-14 21:09:07.194290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.307 [2024-07-14 21:09:07.194319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.194388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.307 [2024-07-14 21:09:07.194411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.194482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.307 [2024-07-14 21:09:07.194505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.307 [2024-07-14 21:09:07.194574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.307 [2024-07-14 21:09:07.194594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.630 #23 NEW cov: 12123 ft: 14989 corp: 21/842b lim: 50 exec/s: 23 rss: 70Mb L: 47/50 MS: 1 ChangeByte- 00:08:10.630 [2024-07-14 21:09:07.234209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.630 [2024-07-14 21:09:07.234239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.630 [2024-07-14 21:09:07.234307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.630 [2024-07-14 21:09:07.234329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.234396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.234420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 #24 NEW cov: 12123 ft: 15040 corp: 22/881b lim: 50 exec/s: 24 rss: 70Mb L: 39/50 MS: 1 InsertByte- 00:08:10.631 [2024-07-14 21:09:07.274524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.631 [2024-07-14 21:09:07.274553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.274618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 
00:08:10.631 [2024-07-14 21:09:07.274638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.274705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.274726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.274796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.631 [2024-07-14 21:09:07.274817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.631 #25 NEW cov: 12123 ft: 15045 corp: 23/929b lim: 50 exec/s: 25 rss: 70Mb L: 48/50 MS: 1 InsertByte- 00:08:10.631 [2024-07-14 21:09:07.314573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.631 [2024-07-14 21:09:07.314601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.314660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.631 [2024-07-14 21:09:07.314682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.314747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.314766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.314833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.631 [2024-07-14 21:09:07.314852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.631 #26 NEW cov: 12123 ft: 15048 corp: 24/976b lim: 50 exec/s: 26 rss: 70Mb L: 47/50 MS: 1 ChangeByte- 00:08:10.631 [2024-07-14 21:09:07.354692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.631 [2024-07-14 21:09:07.354721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.354785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.631 [2024-07-14 21:09:07.354809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.354877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.354896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.354963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.631 [2024-07-14 21:09:07.354981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.631 #27 NEW cov: 12123 ft: 15058 corp: 25/1024b lim: 50 exec/s: 27 rss: 70Mb L: 48/50 MS: 1 ChangeBit- 00:08:10.631 [2024-07-14 21:09:07.405025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.631 [2024-07-14 21:09:07.405054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.405119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.631 [2024-07-14 21:09:07.405141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.405209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.405230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.405295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.631 [2024-07-14 21:09:07.405314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.405381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:10.631 [2024-07-14 21:09:07.405400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:10.631 #28 NEW cov: 12123 ft: 15084 corp: 26/1074b lim: 50 exec/s: 28 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:10.631 [2024-07-14 21:09:07.454988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.631 [2024-07-14 21:09:07.455017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.455077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.631 [2024-07-14 21:09:07.455097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.455163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.455183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.455252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.631 [2024-07-14 21:09:07.455270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.631 #29 NEW cov: 12123 ft: 15125 corp: 27/1117b lim: 50 exec/s: 29 rss: 70Mb L: 43/50 MS: 1 CopyPart- 00:08:10.631 [2024-07-14 21:09:07.505153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.631 [2024-07-14 21:09:07.505183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.505245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.631 [2024-07-14 21:09:07.505271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.505337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.631 [2024-07-14 21:09:07.505357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.631 [2024-07-14 21:09:07.505423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.631 [2024-07-14 21:09:07.505448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.631 #30 NEW cov: 12123 ft: 15131 corp: 28/1164b lim: 50 exec/s: 30 rss: 70Mb L: 47/50 MS: 1 CrossOver- 00:08:10.891 [2024-07-14 21:09:07.545253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.891 [2024-07-14 21:09:07.545282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.545347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.891 [2024-07-14 21:09:07.545369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.545434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.891 [2024-07-14 21:09:07.545460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.545554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.891 [2024-07-14 21:09:07.545573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.891 #31 NEW cov: 12123 ft: 15139 corp: 29/1211b lim: 50 exec/s: 31 rss: 70Mb L: 47/50 MS: 1 ChangeByte- 00:08:10.891 [2024-07-14 21:09:07.585354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.891 [2024-07-14 21:09:07.585382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.585447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.891 [2024-07-14 21:09:07.585468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.585536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.891 [2024-07-14 21:09:07.585555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.585618] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.891 [2024-07-14 21:09:07.585637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.891 #32 NEW cov: 12123 ft: 15161 corp: 30/1259b lim: 50 exec/s: 32 rss: 70Mb L: 48/50 MS: 1 ChangeByte- 00:08:10.891 [2024-07-14 21:09:07.635526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.891 [2024-07-14 21:09:07.635554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.635615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.891 [2024-07-14 21:09:07.635637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.635703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.891 [2024-07-14 21:09:07.635725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.635794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.891 [2024-07-14 21:09:07.635813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.891 #33 NEW cov: 12123 ft: 15170 corp: 31/1302b lim: 50 exec/s: 33 rss: 70Mb L: 43/50 MS: 1 EraseBytes- 00:08:10.891 [2024-07-14 21:09:07.675640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.891 [2024-07-14 21:09:07.675669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.675734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.891 [2024-07-14 21:09:07.675756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.675825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.891 [2024-07-14 21:09:07.675846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.675912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.891 [2024-07-14 21:09:07.675931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.891 #34 NEW cov: 12123 ft: 15204 corp: 32/1348b lim: 50 exec/s: 34 rss: 70Mb L: 46/50 MS: 1 InsertRepeatedBytes- 00:08:10.891 [2024-07-14 21:09:07.725780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.891 [2024-07-14 21:09:07.725808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.725868] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.891 [2024-07-14 21:09:07.725888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.725956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.891 [2024-07-14 21:09:07.725974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.726040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.891 [2024-07-14 21:09:07.726058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.891 #35 NEW cov: 12123 ft: 15205 corp: 33/1396b lim: 50 exec/s: 35 rss: 71Mb L: 48/50 MS: 1 ChangeByte- 00:08:10.891 [2024-07-14 21:09:07.765881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:10.891 [2024-07-14 21:09:07.765910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.765974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:10.891 [2024-07-14 21:09:07.765996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.766067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:10.891 [2024-07-14 21:09:07.766091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.891 [2024-07-14 21:09:07.766162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:10.891 [2024-07-14 21:09:07.766184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.151 #36 NEW cov: 12123 ft: 15230 corp: 34/1445b lim: 50 exec/s: 36 rss: 71Mb L: 49/50 MS: 1 CopyPart- 00:08:11.151 [2024-07-14 21:09:07.816210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.151 [2024-07-14 21:09:07.816237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.816299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.151 [2024-07-14 21:09:07.816321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.816386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.151 [2024-07-14 21:09:07.816405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.816474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.151 
[2024-07-14 21:09:07.816494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.816561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:11.151 [2024-07-14 21:09:07.816580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:11.151 #37 NEW cov: 12123 ft: 15239 corp: 35/1495b lim: 50 exec/s: 37 rss: 71Mb L: 50/50 MS: 1 CopyPart- 00:08:11.151 [2024-07-14 21:09:07.855978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.151 [2024-07-14 21:09:07.856007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.856073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.151 [2024-07-14 21:09:07.856094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.856160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.151 [2024-07-14 21:09:07.856179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.151 #38 NEW cov: 12123 ft: 15319 corp: 36/1533b lim: 50 exec/s: 38 rss: 71Mb L: 38/50 MS: 1 ChangeByte- 00:08:11.151 [2024-07-14 21:09:07.896445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.151 [2024-07-14 21:09:07.896474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.896535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.151 [2024-07-14 21:09:07.896555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.896619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.151 [2024-07-14 21:09:07.896637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.896701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.151 [2024-07-14 21:09:07.896719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.896787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:11.151 [2024-07-14 21:09:07.896806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:11.151 #39 NEW cov: 12123 ft: 15365 corp: 37/1583b lim: 50 exec/s: 39 rss: 71Mb L: 50/50 MS: 1 ChangeByte- 00:08:11.151 [2024-07-14 21:09:07.945988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.151 
[2024-07-14 21:09:07.946016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.151 #42 NEW cov: 12123 ft: 15425 corp: 38/1593b lim: 50 exec/s: 42 rss: 71Mb L: 10/50 MS: 3 CMP-ChangeBinInt-InsertByte- DE: "\001\000\000\000\000\000\000\000"- 00:08:11.151 [2024-07-14 21:09:07.986523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.151 [2024-07-14 21:09:07.986551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.986610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:11.151 [2024-07-14 21:09:07.986632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.986699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:11.151 [2024-07-14 21:09:07.986718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.151 [2024-07-14 21:09:07.986783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:11.151 [2024-07-14 21:09:07.986803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.151 #43 NEW cov: 12123 ft: 15434 corp: 39/1636b lim: 50 exec/s: 43 rss: 71Mb L: 43/50 MS: 1 EraseBytes- 00:08:11.151 [2024-07-14 21:09:08.036210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:11.151 [2024-07-14 21:09:08.036237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.410 #44 NEW cov: 12123 ft: 15445 corp: 40/1648b lim: 50 exec/s: 22 rss: 71Mb L: 12/50 MS: 1 CrossOver- 00:08:11.410 #44 DONE cov: 12123 ft: 15445 corp: 40/1648b lim: 50 exec/s: 22 rss: 71Mb 00:08:11.410 ###### Recommended dictionary. ###### 00:08:11.410 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:11.410 ###### End of recommended dictionary. 
###### 00:08:11.410 Done 44 runs in 2 second(s) 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:11.410 21:09:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:11.410 [2024-07-14 21:09:08.219605] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:11.410 [2024-07-14 21:09:08.219673] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027441 ] 00:08:11.410 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.669 [2024-07-14 21:09:08.393018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.669 [2024-07-14 21:09:08.414380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.669 [2024-07-14 21:09:08.466683] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.669 [2024-07-14 21:09:08.483002] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:11.669 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:11.669 INFO: Seed: 60093287 00:08:11.669 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:11.669 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:11.669 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:11.669 INFO: A corpus is not provided, starting from an empty corpus 00:08:11.669 #2 INITED exec/s: 0 rss: 63Mb 00:08:11.669 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:11.669 This may also happen if the target rejected all inputs we tried so far 00:08:11.669 [2024-07-14 21:09:08.538151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:11.669 [2024-07-14 21:09:08.538183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 NEW_FUNC[1/692]: 0x4bb140 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:12.186 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.186 #12 NEW cov: 11903 ft: 11904 corp: 2/20b lim: 85 exec/s: 0 rss: 69Mb L: 19/19 MS: 5 CopyPart-ChangeByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:12.186 [2024-07-14 21:09:08.848855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-14 21:09:08.848891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 NEW_FUNC[1/1]: 0x102ba70 in _sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1323 00:08:12.186 #13 NEW cov: 12035 ft: 12416 corp: 3/50b lim: 85 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:12.186 [2024-07-14 21:09:08.898949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-14 21:09:08.898980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 #14 NEW cov: 12041 ft: 12773 corp: 4/80b lim: 85 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 ChangeByte- 00:08:12.186 [2024-07-14 21:09:08.949054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-14 21:09:08.949084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 #15 NEW cov: 12126 ft: 13088 corp: 5/111b lim: 85 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 InsertByte- 00:08:12.186 [2024-07-14 21:09:08.989176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-14 21:09:08.989206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 #21 NEW cov: 12126 ft: 13216 corp: 6/141b lim: 85 exec/s: 0 rss: 70Mb L: 30/31 MS: 1 CMP- DE: "\377\001\000\000"- 00:08:12.186 [2024-07-14 21:09:09.039313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.186 [2024-07-14 21:09:09.039343] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.186 #22 NEW cov: 12126 ft: 13330 corp: 7/172b lim: 85 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 CMP- DE: "\007\000"- 00:08:12.444 [2024-07-14 21:09:09.089481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.444 [2024-07-14 21:09:09.089511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.444 #23 NEW cov: 12126 ft: 13382 corp: 8/203b lim: 85 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 ChangeBit- 00:08:12.444 [2024-07-14 21:09:09.139597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.444 [2024-07-14 21:09:09.139626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.444 #24 NEW cov: 12126 ft: 13455 corp: 9/233b lim: 85 exec/s: 0 rss: 70Mb L: 30/31 MS: 1 ChangeBinInt- 00:08:12.444 [2024-07-14 21:09:09.179711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-14 21:09:09.179740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 #25 NEW cov: 12126 ft: 13487 corp: 10/260b lim: 85 exec/s: 0 rss: 70Mb L: 27/31 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:12.445 [2024-07-14 21:09:09.219832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-14 21:09:09.219861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 #31 NEW cov: 12126 ft: 13540 corp: 11/290b lim: 85 exec/s: 0 rss: 70Mb L: 30/31 MS: 1 ShuffleBytes- 00:08:12.445 [2024-07-14 21:09:09.270248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-14 21:09:09.270276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-14 21:09:09.270344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.445 [2024-07-14 21:09:09.270366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-14 21:09:09.270431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-14 21:09:09.270453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 #32 NEW cov: 12126 ft: 14407 corp: 12/347b lim: 85 exec/s: 0 rss: 70Mb L: 57/57 MS: 1 CrossOver- 00:08:12.445 [2024-07-14 21:09:09.310398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.445 [2024-07-14 21:09:09.310425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.445 [2024-07-14 21:09:09.310500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 
00:08:12.445 [2024-07-14 21:09:09.310522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.445 [2024-07-14 21:09:09.310589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.445 [2024-07-14 21:09:09.310612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.445 #33 NEW cov: 12126 ft: 14432 corp: 13/404b lim: 85 exec/s: 0 rss: 70Mb L: 57/57 MS: 1 ShuffleBytes- 00:08:12.703 [2024-07-14 21:09:09.360203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-14 21:09:09.360231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 #34 NEW cov: 12126 ft: 14481 corp: 14/436b lim: 85 exec/s: 0 rss: 70Mb L: 32/57 MS: 1 PersAutoDict- DE: "\007\000"- 00:08:12.704 [2024-07-14 21:09:09.400438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-14 21:09:09.400486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-14 21:09:09.400556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-14 21:09:09.400578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:12.704 #35 NEW cov: 12149 ft: 14799 corp: 15/485b lim: 85 exec/s: 0 rss: 70Mb L: 49/57 MS: 1 CopyPart- 00:08:12.704 [2024-07-14 21:09:09.450521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-14 21:09:09.450562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 #36 NEW cov: 12149 ft: 14843 corp: 16/517b lim: 85 exec/s: 0 rss: 70Mb L: 32/57 MS: 1 InsertByte- 00:08:12.704 [2024-07-14 21:09:09.500885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-14 21:09:09.500914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-14 21:09:09.500979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-14 21:09:09.501000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-14 21:09:09.501065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-14 21:09:09.501084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 #37 NEW cov: 12149 ft: 14859 corp: 17/573b lim: 85 exec/s: 37 rss: 70Mb L: 56/57 MS: 1 InsertRepeatedBytes- 00:08:12.704 [2024-07-14 21:09:09.541010] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-14 21:09:09.541038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 [2024-07-14 21:09:09.541103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.704 [2024-07-14 21:09:09.541123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.704 [2024-07-14 21:09:09.541188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.704 [2024-07-14 21:09:09.541208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.704 #38 NEW cov: 12149 ft: 14905 corp: 18/639b lim: 85 exec/s: 38 rss: 70Mb L: 66/66 MS: 1 CopyPart- 00:08:12.704 [2024-07-14 21:09:09.580833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.704 [2024-07-14 21:09:09.580861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.704 #39 NEW cov: 12149 ft: 14948 corp: 19/670b lim: 85 exec/s: 39 rss: 70Mb L: 31/66 MS: 1 ChangeByte- 00:08:12.962 [2024-07-14 21:09:09.620947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.962 [2024-07-14 21:09:09.620975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.963 #41 NEW cov: 12149 ft: 14961 corp: 20/697b lim: 85 exec/s: 41 rss: 70Mb L: 27/66 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:12.963 [2024-07-14 21:09:09.661091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.963 [2024-07-14 21:09:09.661119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.963 #42 NEW cov: 12149 ft: 14971 corp: 21/717b lim: 85 exec/s: 42 rss: 70Mb L: 20/66 MS: 1 EraseBytes- 00:08:12.963 [2024-07-14 21:09:09.711348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.963 [2024-07-14 21:09:09.711377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.963 [2024-07-14 21:09:09.711453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.963 [2024-07-14 21:09:09.711475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.963 #43 NEW cov: 12149 ft: 14988 corp: 22/754b lim: 85 exec/s: 43 rss: 70Mb L: 37/66 MS: 1 InsertRepeatedBytes- 00:08:12.963 [2024-07-14 21:09:09.751324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.963 [2024-07-14 21:09:09.751352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.963 #44 NEW cov: 12149 ft: 15025 corp: 23/781b lim: 85 exec/s: 44 rss: 70Mb L: 27/66 MS: 1 ChangeBinInt- 00:08:12.963 [2024-07-14 21:09:09.791770] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.963 [2024-07-14 21:09:09.791798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.963 [2024-07-14 21:09:09.791865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.963 [2024-07-14 21:09:09.791890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.963 [2024-07-14 21:09:09.791956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.963 [2024-07-14 21:09:09.791974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.963 #45 NEW cov: 12149 ft: 15042 corp: 24/837b lim: 85 exec/s: 45 rss: 70Mb L: 56/66 MS: 1 ChangeByte- 00:08:12.963 [2024-07-14 21:09:09.842036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:12.963 [2024-07-14 21:09:09.842064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.963 [2024-07-14 21:09:09.842128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:12.963 [2024-07-14 21:09:09.842150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.963 [2024-07-14 21:09:09.842217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:12.963 [2024-07-14 21:09:09.842236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.963 [2024-07-14 21:09:09.842303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:12.963 [2024-07-14 21:09:09.842322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.221 #46 NEW cov: 12149 ft: 15401 corp: 25/911b lim: 85 exec/s: 46 rss: 70Mb L: 74/74 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:13.221 [2024-07-14 21:09:09.891872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.221 [2024-07-14 21:09:09.891900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.221 [2024-07-14 21:09:09.891970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.221 [2024-07-14 21:09:09.891993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.221 #48 NEW cov: 12149 ft: 15425 corp: 26/956b lim: 85 exec/s: 48 rss: 70Mb L: 45/74 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:13.221 [2024-07-14 21:09:09.931864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.221 [2024-07-14 21:09:09.931893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:13.221 #49 NEW cov: 12149 ft: 15426 corp: 27/987b lim: 85 exec/s: 49 rss: 71Mb L: 31/74 MS: 1 InsertByte- 00:08:13.221 [2024-07-14 21:09:09.972265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.221 [2024-07-14 21:09:09.972294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.221 [2024-07-14 21:09:09.972360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.222 [2024-07-14 21:09:09.972382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.222 [2024-07-14 21:09:09.972453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.222 [2024-07-14 21:09:09.972477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.222 #50 NEW cov: 12149 ft: 15468 corp: 28/1054b lim: 85 exec/s: 50 rss: 71Mb L: 67/74 MS: 1 CopyPart- 00:08:13.222 [2024-07-14 21:09:10.022107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.222 [2024-07-14 21:09:10.022135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.222 #51 NEW cov: 12149 ft: 15532 corp: 29/1086b lim: 85 exec/s: 51 rss: 71Mb L: 32/74 MS: 1 InsertByte- 00:08:13.222 [2024-07-14 21:09:10.062275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.222 [2024-07-14 21:09:10.062303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.222 #52 NEW cov: 12149 ft: 15540 corp: 30/1113b lim: 85 exec/s: 52 rss: 71Mb L: 27/74 MS: 1 ShuffleBytes- 00:08:13.222 [2024-07-14 21:09:10.112385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.222 [2024-07-14 21:09:10.112411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.480 #53 NEW cov: 12149 ft: 15567 corp: 31/1143b lim: 85 exec/s: 53 rss: 71Mb L: 30/74 MS: 1 ShuffleBytes- 00:08:13.480 [2024-07-14 21:09:10.152474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.480 [2024-07-14 21:09:10.152505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.480 #54 NEW cov: 12149 ft: 15574 corp: 32/1174b lim: 85 exec/s: 54 rss: 71Mb L: 31/74 MS: 1 ChangeBinInt- 00:08:13.480 [2024-07-14 21:09:10.193081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.480 [2024-07-14 21:09:10.193107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.193171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.480 [2024-07-14 21:09:10.193188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.193244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.480 [2024-07-14 21:09:10.193260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.193315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.480 [2024-07-14 21:09:10.193333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.480 #55 NEW cov: 12149 ft: 15581 corp: 33/1249b lim: 85 exec/s: 55 rss: 71Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:08:13.480 [2024-07-14 21:09:10.242994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.480 [2024-07-14 21:09:10.243020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.243056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.480 [2024-07-14 21:09:10.243072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.243129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.480 [2024-07-14 21:09:10.243144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.480 #56 NEW cov: 12149 ft: 15587 corp: 34/1307b lim: 85 exec/s: 56 rss: 71Mb L: 58/75 MS: 1 InsertByte- 00:08:13.480 [2024-07-14 21:09:10.293152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.480 [2024-07-14 21:09:10.293180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.293224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.480 [2024-07-14 21:09:10.293241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.480 [2024-07-14 21:09:10.293296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.480 [2024-07-14 21:09:10.293313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.480 #57 NEW cov: 12149 ft: 15627 corp: 35/1369b lim: 85 exec/s: 57 rss: 71Mb L: 62/75 MS: 1 CrossOver- 00:08:13.480 [2024-07-14 21:09:10.342991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.480 [2024-07-14 21:09:10.343018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.480 #60 NEW cov: 12149 ft: 15632 corp: 36/1387b lim: 85 exec/s: 60 rss: 71Mb L: 18/75 MS: 3 EraseBytes-ShuffleBytes-PersAutoDict- DE: "\007\000"- 00:08:13.740 [2024-07-14 21:09:10.383151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.740 [2024-07-14 21:09:10.383179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.740 #61 NEW cov: 12149 ft: 15642 corp: 37/1412b lim: 85 exec/s: 61 rss: 71Mb L: 25/75 MS: 1 EraseBytes- 00:08:13.740 [2024-07-14 21:09:10.433367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.740 [2024-07-14 21:09:10.433394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.740 [2024-07-14 21:09:10.433455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.740 [2024-07-14 21:09:10.433472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.740 #62 NEW cov: 12149 ft: 15662 corp: 38/1447b lim: 85 exec/s: 62 rss: 71Mb L: 35/75 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:08:13.740 [2024-07-14 21:09:10.473705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.740 [2024-07-14 21:09:10.473733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.740 [2024-07-14 21:09:10.473771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.740 [2024-07-14 21:09:10.473805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.740 [2024-07-14 21:09:10.473860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.740 [2024-07-14 21:09:10.473876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.740 #63 NEW cov: 12149 ft: 15685 corp: 39/1503b lim: 85 exec/s: 63 rss: 71Mb L: 56/75 MS: 1 ShuffleBytes- 00:08:13.740 [2024-07-14 21:09:10.513928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:13.740 [2024-07-14 21:09:10.513954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.740 [2024-07-14 21:09:10.513996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:13.740 [2024-07-14 21:09:10.514013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.740 [2024-07-14 21:09:10.514067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:13.740 [2024-07-14 21:09:10.514084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.740 [2024-07-14 21:09:10.514138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:13.740 [2024-07-14 21:09:10.514154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.740 #64 pulse cov: 12149 ft: 15712 corp: 39/1503b lim: 85 
exec/s: 32 rss: 72Mb 00:08:13.740 #64 NEW cov: 12149 ft: 15712 corp: 40/1581b lim: 85 exec/s: 32 rss: 72Mb L: 78/78 MS: 1 CrossOver- 00:08:13.740 #64 DONE cov: 12149 ft: 15712 corp: 40/1581b lim: 85 exec/s: 32 rss: 72Mb 00:08:13.740 ###### Recommended dictionary. ###### 00:08:13.740 "\377\001\000\000" # Uses: 1 00:08:13.740 "\007\000" # Uses: 3 00:08:13.740 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:13.740 ###### End of recommended dictionary. ###### 00:08:13.740 Done 64 runs in 2 second(s) 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:13.999 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.000 21:09:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:14.000 [2024-07-14 21:09:10.701825] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:14.000 [2024-07-14 21:09:10.701914] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027910 ] 00:08:14.000 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.000 [2024-07-14 21:09:10.878007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.000 [2024-07-14 21:09:10.899997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.259 [2024-07-14 21:09:10.952236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.259 [2024-07-14 21:09:10.968576] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:14.259 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.259 INFO: Seed: 2545108269 00:08:14.259 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:14.259 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:14.259 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:14.259 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.259 #2 INITED exec/s: 0 rss: 63Mb 00:08:14.259 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:14.259 This may also happen if the target rejected all inputs we tried so far 00:08:14.259 [2024-07-14 21:09:11.013571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.259 [2024-07-14 21:09:11.013601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.518 NEW_FUNC[1/692]: 0x4be370 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:14.518 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:14.518 #5 NEW cov: 11838 ft: 11832 corp: 2/8b lim: 25 exec/s: 0 rss: 69Mb L: 7/7 MS: 3 ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:08:14.518 [2024-07-14 21:09:11.344379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.518 [2024-07-14 21:09:11.344414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.518 #19 NEW cov: 11968 ft: 12550 corp: 3/16b lim: 25 exec/s: 0 rss: 69Mb L: 8/8 MS: 4 CrossOver-CopyPart-CopyPart-CrossOver- 00:08:14.518 [2024-07-14 21:09:11.384655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.518 [2024-07-14 21:09:11.384681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.518 [2024-07-14 21:09:11.384727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.518 [2024-07-14 21:09:11.384742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.518 [2024-07-14 21:09:11.384795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.518 
[2024-07-14 21:09:11.384811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.518 #20 NEW cov: 11974 ft: 13221 corp: 4/32b lim: 25 exec/s: 0 rss: 69Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:14.777 [2024-07-14 21:09:11.434555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.777 [2024-07-14 21:09:11.434584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.777 #21 NEW cov: 12059 ft: 13471 corp: 5/40b lim: 25 exec/s: 0 rss: 69Mb L: 8/16 MS: 1 ChangeByte- 00:08:14.777 [2024-07-14 21:09:11.484941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.777 [2024-07-14 21:09:11.484967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.777 [2024-07-14 21:09:11.485004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.777 [2024-07-14 21:09:11.485020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.777 [2024-07-14 21:09:11.485074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.777 [2024-07-14 21:09:11.485090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.777 #22 NEW cov: 12059 ft: 13535 corp: 6/56b lim: 25 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 ChangeBit- 00:08:14.777 [2024-07-14 21:09:11.534849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.777 [2024-07-14 21:09:11.534876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.777 #23 NEW cov: 12059 ft: 13609 corp: 7/64b lim: 25 exec/s: 0 rss: 70Mb L: 8/16 MS: 1 ChangeBinInt- 00:08:14.777 [2024-07-14 21:09:11.575067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.777 [2024-07-14 21:09:11.575094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.777 [2024-07-14 21:09:11.575147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.777 [2024-07-14 21:09:11.575164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.777 #24 NEW cov: 12059 ft: 13874 corp: 8/75b lim: 25 exec/s: 0 rss: 70Mb L: 11/16 MS: 1 CopyPart- 00:08:14.777 [2024-07-14 21:09:11.625305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.777 [2024-07-14 21:09:11.625331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.777 [2024-07-14 21:09:11.625372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:14.777 [2024-07-14 21:09:11.625386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.777 [2024-07-14 21:09:11.625460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:14.777 [2024-07-14 21:09:11.625476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.777 #25 NEW cov: 12059 ft: 13904 corp: 9/91b lim: 25 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 ChangeByte- 00:08:14.777 [2024-07-14 21:09:11.665185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:14.777 [2024-07-14 21:09:11.665211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.036 #26 NEW cov: 12059 ft: 13929 corp: 10/99b lim: 25 exec/s: 0 rss: 70Mb L: 8/16 MS: 1 ShuffleBytes- 00:08:15.036 [2024-07-14 21:09:11.705634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.036 [2024-07-14 21:09:11.705660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.705711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.036 [2024-07-14 21:09:11.705725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.705777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.036 [2024-07-14 21:09:11.705792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.705844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.036 [2024-07-14 21:09:11.705859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.036 #27 NEW cov: 12059 ft: 14367 corp: 11/120b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 CrossOver- 00:08:15.036 [2024-07-14 21:09:11.745619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.036 [2024-07-14 21:09:11.745644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.745688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.036 [2024-07-14 21:09:11.745703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.745754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.036 [2024-07-14 21:09:11.745771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.036 #28 NEW cov: 12059 ft: 14432 corp: 12/136b lim: 25 exec/s: 0 rss: 70Mb L: 16/21 MS: 1 ChangeByte- 00:08:15.036 [2024-07-14 21:09:11.795776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.036 [2024-07-14 21:09:11.795802] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.795841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.036 [2024-07-14 21:09:11.795858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.036 [2024-07-14 21:09:11.795913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.037 [2024-07-14 21:09:11.795929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.037 #29 NEW cov: 12059 ft: 14491 corp: 13/152b lim: 25 exec/s: 0 rss: 70Mb L: 16/21 MS: 1 ChangeBinInt- 00:08:15.037 [2024-07-14 21:09:11.845906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.037 [2024-07-14 21:09:11.845933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.845970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.037 [2024-07-14 21:09:11.845985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.846038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.037 [2024-07-14 21:09:11.846054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.037 #30 NEW cov: 12059 ft: 14500 corp: 14/168b lim: 25 exec/s: 0 rss: 70Mb L: 16/21 MS: 1 ChangeByte- 00:08:15.037 [2024-07-14 21:09:11.896178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.037 [2024-07-14 21:09:11.896204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.896250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.037 [2024-07-14 21:09:11.896264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.896319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.037 [2024-07-14 21:09:11.896335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.896390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.037 [2024-07-14 21:09:11.896407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.037 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:15.037 #31 NEW cov: 12082 ft: 14542 corp: 15/191b lim: 25 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:08:15.037 [2024-07-14 21:09:11.936265] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.037 [2024-07-14 21:09:11.936292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.936332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.037 [2024-07-14 21:09:11.936348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.936401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.037 [2024-07-14 21:09:11.936417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.037 [2024-07-14 21:09:11.936473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.037 [2024-07-14 21:09:11.936489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.296 #32 NEW cov: 12082 ft: 14560 corp: 16/211b lim: 25 exec/s: 0 rss: 70Mb L: 20/23 MS: 1 CMP- DE: "\000\000\001\000"- 00:08:15.296 [2024-07-14 21:09:11.986353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.296 [2024-07-14 21:09:11.986379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 21:09:11.986428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.296 [2024-07-14 21:09:11.986451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 21:09:11.986506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.296 [2024-07-14 21:09:11.986522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.296 #33 NEW cov: 12082 ft: 14600 corp: 17/226b lim: 25 exec/s: 33 rss: 70Mb L: 15/23 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:08:15.296 [2024-07-14 21:09:12.036216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.296 [2024-07-14 21:09:12.036242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.296 #34 NEW cov: 12082 ft: 14624 corp: 18/234b lim: 25 exec/s: 34 rss: 70Mb L: 8/23 MS: 1 ChangeBit- 00:08:15.296 [2024-07-14 21:09:12.076664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.296 [2024-07-14 21:09:12.076691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 21:09:12.076736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.296 [2024-07-14 21:09:12.076751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 
21:09:12.076803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.296 [2024-07-14 21:09:12.076817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 21:09:12.076869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.296 [2024-07-14 21:09:12.076885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.296 #35 NEW cov: 12082 ft: 14633 corp: 19/254b lim: 25 exec/s: 35 rss: 70Mb L: 20/23 MS: 1 CrossOver- 00:08:15.296 [2024-07-14 21:09:12.116680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.296 [2024-07-14 21:09:12.116707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 21:09:12.116744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.296 [2024-07-14 21:09:12.116762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.296 [2024-07-14 21:09:12.116818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.296 [2024-07-14 21:09:12.116834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.296 #36 NEW cov: 12082 ft: 14654 corp: 20/270b lim: 25 exec/s: 36 rss: 70Mb L: 16/23 MS: 1 InsertByte- 00:08:15.297 [2024-07-14 21:09:12.166896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.297 [2024-07-14 21:09:12.166923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.297 [2024-07-14 21:09:12.166973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.297 [2024-07-14 21:09:12.166988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.297 [2024-07-14 21:09:12.167044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.297 [2024-07-14 21:09:12.167077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.297 [2024-07-14 21:09:12.167132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.297 [2024-07-14 21:09:12.167149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.555 #37 NEW cov: 12082 ft: 14655 corp: 21/290b lim: 25 exec/s: 37 rss: 70Mb L: 20/23 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:08:15.555 [2024-07-14 21:09:12.216732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.555 [2024-07-14 21:09:12.216760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.555 #38 
NEW cov: 12082 ft: 14690 corp: 22/298b lim: 25 exec/s: 38 rss: 70Mb L: 8/23 MS: 1 ChangeByte- 00:08:15.555 [2024-07-14 21:09:12.267002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.555 [2024-07-14 21:09:12.267029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.555 [2024-07-14 21:09:12.267084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.555 [2024-07-14 21:09:12.267099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.555 #39 NEW cov: 12082 ft: 14696 corp: 23/310b lim: 25 exec/s: 39 rss: 70Mb L: 12/23 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:08:15.555 [2024-07-14 21:09:12.317026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.555 [2024-07-14 21:09:12.317054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.555 #40 NEW cov: 12082 ft: 14730 corp: 24/317b lim: 25 exec/s: 40 rss: 70Mb L: 7/23 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:08:15.555 [2024-07-14 21:09:12.357514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.555 [2024-07-14 21:09:12.357540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.556 [2024-07-14 21:09:12.357587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.556 [2024-07-14 21:09:12.357603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.556 [2024-07-14 21:09:12.357657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.556 [2024-07-14 21:09:12.357674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.556 [2024-07-14 21:09:12.357729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.556 [2024-07-14 21:09:12.357745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.556 #41 NEW cov: 12082 ft: 14741 corp: 25/341b lim: 25 exec/s: 41 rss: 70Mb L: 24/24 MS: 1 CopyPart- 00:08:15.556 [2024-07-14 21:09:12.407272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.556 [2024-07-14 21:09:12.407299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.556 #47 NEW cov: 12082 ft: 14746 corp: 26/349b lim: 25 exec/s: 47 rss: 70Mb L: 8/24 MS: 1 ShuffleBytes- 00:08:15.556 [2024-07-14 21:09:12.447351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.556 [2024-07-14 21:09:12.447377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.815 #48 NEW cov: 12082 ft: 14774 corp: 27/354b lim: 25 exec/s: 48 rss: 
70Mb L: 5/24 MS: 1 EraseBytes- 00:08:15.815 [2024-07-14 21:09:12.487572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.815 [2024-07-14 21:09:12.487597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.487637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.815 [2024-07-14 21:09:12.487651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.815 #49 NEW cov: 12082 ft: 14803 corp: 28/365b lim: 25 exec/s: 49 rss: 70Mb L: 11/24 MS: 1 CopyPart- 00:08:15.815 [2024-07-14 21:09:12.528015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.815 [2024-07-14 21:09:12.528057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.528111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.815 [2024-07-14 21:09:12.528126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.528181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.815 [2024-07-14 21:09:12.528196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.528249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.815 [2024-07-14 21:09:12.528265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.528318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:15.815 [2024-07-14 21:09:12.528333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:15.815 #50 NEW cov: 12082 ft: 14903 corp: 29/390b lim: 25 exec/s: 50 rss: 70Mb L: 25/25 MS: 1 InsertByte- 00:08:15.815 [2024-07-14 21:09:12.578101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.815 [2024-07-14 21:09:12.578128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.578176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.815 [2024-07-14 21:09:12.578192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.578245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.815 [2024-07-14 21:09:12.578260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.578312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.815 [2024-07-14 21:09:12.578327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.815 #51 NEW cov: 12082 ft: 14917 corp: 30/414b lim: 25 exec/s: 51 rss: 71Mb L: 24/25 MS: 1 ShuffleBytes- 00:08:15.815 [2024-07-14 21:09:12.617827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.815 [2024-07-14 21:09:12.617853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.815 #52 NEW cov: 12082 ft: 14919 corp: 31/422b lim: 25 exec/s: 52 rss: 71Mb L: 8/25 MS: 1 ChangeBit- 00:08:15.815 [2024-07-14 21:09:12.658349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.815 [2024-07-14 21:09:12.658375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.658424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:15.815 [2024-07-14 21:09:12.658440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.658523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:15.815 [2024-07-14 21:09:12.658539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.815 [2024-07-14 21:09:12.658592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:15.815 [2024-07-14 21:09:12.658608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.815 #58 NEW cov: 12082 ft: 14922 corp: 32/442b lim: 25 exec/s: 58 rss: 71Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:08:15.815 [2024-07-14 21:09:12.708235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:15.815 [2024-07-14 21:09:12.708261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 #59 NEW cov: 12082 ft: 14956 corp: 33/447b lim: 25 exec/s: 59 rss: 71Mb L: 5/25 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:08:16.075 [2024-07-14 21:09:12.748448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-07-14 21:09:12.748474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.748513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.075 [2024-07-14 21:09:12.748531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.075 #60 NEW cov: 12082 ft: 14965 corp: 34/459b lim: 25 exec/s: 60 rss: 71Mb L: 12/25 MS: 1 EraseBytes- 00:08:16.075 [2024-07-14 21:09:12.788559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-07-14 
21:09:12.788585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.788637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.075 [2024-07-14 21:09:12.788653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.075 #61 NEW cov: 12082 ft: 14987 corp: 35/473b lim: 25 exec/s: 61 rss: 71Mb L: 14/25 MS: 1 CopyPart- 00:08:16.075 [2024-07-14 21:09:12.828659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-07-14 21:09:12.828685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.828725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.075 [2024-07-14 21:09:12.828742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.075 #62 NEW cov: 12082 ft: 15002 corp: 36/485b lim: 25 exec/s: 62 rss: 71Mb L: 12/25 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:08:16.075 [2024-07-14 21:09:12.878931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-07-14 21:09:12.878957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.879013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.075 [2024-07-14 21:09:12.879031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.879084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:16.075 [2024-07-14 21:09:12.879100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.075 #63 NEW cov: 12082 ft: 15013 corp: 37/501b lim: 25 exec/s: 63 rss: 71Mb L: 16/25 MS: 1 InsertRepeatedBytes- 00:08:16.075 [2024-07-14 21:09:12.918927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-07-14 21:09:12.918954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.919005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.075 [2024-07-14 21:09:12.919021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.075 #64 NEW cov: 12082 ft: 15031 corp: 38/513b lim: 25 exec/s: 64 rss: 71Mb L: 12/25 MS: 1 CopyPart- 00:08:16.075 [2024-07-14 21:09:12.969044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-07-14 21:09:12.969069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 [2024-07-14 21:09:12.969106] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:16.075 [2024-07-14 21:09:12.969137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.341 #65 NEW cov: 12082 ft: 15037 corp: 39/525b lim: 25 exec/s: 32 rss: 71Mb L: 12/25 MS: 1 ChangeByte- 00:08:16.341 #65 DONE cov: 12082 ft: 15037 corp: 39/525b lim: 25 exec/s: 32 rss: 71Mb 00:08:16.341 ###### Recommended dictionary. ###### 00:08:16.341 "\000\000\001\000" # Uses: 6 00:08:16.341 ###### End of recommended dictionary. ###### 00:08:16.341 Done 65 runs in 2 second(s) 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:16.341 21:09:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:16.341 [2024-07-14 21:09:13.153241] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:16.341 [2024-07-14 21:09:13.153310] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028252 ] 00:08:16.341 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.604 [2024-07-14 21:09:13.334195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.604 [2024-07-14 21:09:13.356481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.604 [2024-07-14 21:09:13.409083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.604 [2024-07-14 21:09:13.425398] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:16.604 INFO: Running with entropic power schedule (0xFF, 100). 00:08:16.604 INFO: Seed: 705139249 00:08:16.604 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:16.604 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:16.604 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:16.604 INFO: A corpus is not provided, starting from an empty corpus 00:08:16.604 #2 INITED exec/s: 0 rss: 63Mb 00:08:16.604 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:16.604 This may also happen if the target rejected all inputs we tried so far 00:08:16.604 [2024-07-14 21:09:13.473260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070958088191 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.604 [2024-07-14 21:09:13.473297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.120 NEW_FUNC[1/693]: 0x4bf450 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:17.120 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.120 #4 NEW cov: 11908 ft: 11909 corp: 2/24b lim: 100 exec/s: 0 rss: 70Mb L: 23/23 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:17.120 [2024-07-14 21:09:13.814113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.120 [2024-07-14 21:09:13.814160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.120 #9 NEW cov: 12040 ft: 12491 corp: 3/57b lim: 100 exec/s: 0 rss: 70Mb L: 33/33 MS: 5 ChangeByte-CrossOver-EraseBytes-InsertByte-CrossOver- 00:08:17.120 [2024-07-14 21:09:13.874072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.120 [2024-07-14 21:09:13.874104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.120 #10 NEW cov: 12046 ft: 12814 corp: 4/90b lim: 100 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:17.120 [2024-07-14 21:09:13.954329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070958088191 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:17.120 [2024-07-14 21:09:13.954360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.120 #11 NEW cov: 12131 ft: 13089 corp: 5/113b lim: 100 exec/s: 0 rss: 70Mb L: 23/33 MS: 1 CopyPart- 00:08:17.378 [2024-07-14 21:09:14.034603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:10883 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.034636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.378 [2024-07-14 21:09:14.034686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.034706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.378 #12 NEW cov: 12131 ft: 13950 corp: 6/154b lim: 100 exec/s: 0 rss: 70Mb L: 41/41 MS: 1 CMP- DE: "\001*\202g\202\230|f"- 00:08:17.378 [2024-07-14 21:09:14.114799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.114828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.378 [2024-07-14 21:09:14.114862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.114880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.378 #13 NEW cov: 12131 ft: 14070 corp: 7/213b lim: 100 exec/s: 0 rss: 70Mb L: 59/59 MS: 1 CopyPart- 00:08:17.378 [2024-07-14 21:09:14.174957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.174987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.378 [2024-07-14 21:09:14.175021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.175039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.378 #14 NEW cov: 12131 ft: 14162 corp: 8/264b lim: 100 exec/s: 0 rss: 70Mb L: 51/59 MS: 1 CrossOver- 00:08:17.378 [2024-07-14 21:09:14.255063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070958088191 len:65520 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.378 [2024-07-14 21:09:14.255093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.637 #15 NEW cov: 12131 ft: 14190 corp: 9/287b lim: 100 exec/s: 0 rss: 70Mb L: 23/59 MS: 1 ChangeBit- 00:08:17.637 [2024-07-14 21:09:14.335248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.335277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.637 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:17.637 #16 NEW cov: 12148 ft: 14289 corp: 10/311b lim: 100 exec/s: 0 rss: 70Mb L: 24/59 MS: 1 EraseBytes- 00:08:17.637 [2024-07-14 21:09:14.395429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.395465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.637 #17 NEW cov: 12148 ft: 14341 corp: 11/344b lim: 100 exec/s: 0 rss: 70Mb L: 33/59 MS: 1 ShuffleBytes- 00:08:17.637 [2024-07-14 21:09:14.445636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.445666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.637 [2024-07-14 21:09:14.445703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.445721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.637 #18 NEW cov: 12148 ft: 14359 corp: 12/397b lim: 100 exec/s: 18 rss: 70Mb L: 53/59 MS: 1 InsertRepeatedBytes- 00:08:17.637 [2024-07-14 21:09:14.495788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.495818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.637 [2024-07-14 21:09:14.495867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072843165695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.495893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.637 [2024-07-14 21:09:14.495924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.637 [2024-07-14 21:09:14.495941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.896 #19 NEW cov: 12148 ft: 14731 corp: 13/471b lim: 100 exec/s: 19 rss: 71Mb L: 74/74 MS: 1 CrossOver- 00:08:17.896 [2024-07-14 21:09:14.575999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446743369331530751 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.576029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.896 [2024-07-14 21:09:14.576078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.576101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.896 [2024-07-14 21:09:14.576132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.576148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.896 #23 NEW cov: 12148 ft: 14745 corp: 14/548b lim: 100 exec/s: 23 rss: 71Mb L: 77/77 MS: 4 CrossOver-CopyPart-CrossOver-InsertRepeatedBytes- 00:08:17.896 [2024-07-14 21:09:14.636027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:792633534400430080 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.636057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.896 #24 NEW cov: 12148 ft: 14825 corp: 15/579b lim: 100 exec/s: 24 rss: 71Mb L: 31/77 MS: 1 EraseBytes- 00:08:17.896 [2024-07-14 21:09:14.696257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4467570834646499327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.696286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.896 [2024-07-14 21:09:14.696334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.696358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.896 #25 NEW cov: 12148 ft: 14837 corp: 16/620b lim: 100 exec/s: 25 rss: 71Mb L: 41/77 MS: 1 CMP- DE: ">\000\000\000\000\000\000\000"- 00:08:17.896 [2024-07-14 21:09:14.776408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070958088191 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:17.896 [2024-07-14 21:09:14.776438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.156 #26 NEW cov: 12148 ft: 14863 corp: 17/643b lim: 100 exec/s: 26 rss: 71Mb L: 23/77 MS: 1 ChangeByte- 00:08:18.156 [2024-07-14 21:09:14.826547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:10883 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:14.826577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.156 #27 NEW cov: 12148 ft: 14886 corp: 18/674b lim: 100 exec/s: 27 rss: 71Mb L: 31/77 MS: 1 EraseBytes- 00:08:18.156 [2024-07-14 21:09:14.906753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:14.906784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.156 #28 NEW cov: 12148 ft: 14903 corp: 
19/698b lim: 100 exec/s: 28 rss: 71Mb L: 24/77 MS: 1 ShuffleBytes- 00:08:18.156 [2024-07-14 21:09:14.987060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:14.987090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.156 [2024-07-14 21:09:14.987138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:14.987162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.156 #29 NEW cov: 12148 ft: 14926 corp: 20/757b lim: 100 exec/s: 29 rss: 71Mb L: 59/77 MS: 1 CrossOver- 00:08:18.156 [2024-07-14 21:09:15.037263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:15.037293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.156 [2024-07-14 21:09:15.037340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:15.037364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.156 [2024-07-14 21:09:15.037394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:15.037410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.156 [2024-07-14 21:09:15.037438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.156 [2024-07-14 21:09:15.037464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.415 #30 NEW cov: 12148 ft: 15326 corp: 21/840b lim: 100 exec/s: 30 rss: 71Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:08:18.415 [2024-07-14 21:09:15.097429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.097468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.097519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.097537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.097567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.097583] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.097612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.097628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.415 #31 NEW cov: 12148 ft: 15328 corp: 22/923b lim: 100 exec/s: 31 rss: 71Mb L: 83/83 MS: 1 ChangeByte- 00:08:18.415 [2024-07-14 21:09:15.177536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3386706919782612991 len:10883 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.177567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.177616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.177637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.415 #32 NEW cov: 12148 ft: 15392 corp: 23/964b lim: 100 exec/s: 32 rss: 71Mb L: 41/83 MS: 1 ChangeByte- 00:08:18.415 [2024-07-14 21:09:15.237823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.237853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.237900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072843165695 len:65343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.237923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.237953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.237969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.415 [2024-07-14 21:09:15.237998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744069414584330 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.415 [2024-07-14 21:09:15.238014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.415 #33 NEW cov: 12148 ft: 15435 corp: 24/1046b lim: 100 exec/s: 33 rss: 71Mb L: 82/83 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:08:18.675 [2024-07-14 21:09:15.317936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3386706919782612991 len:10883 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.675 [2024-07-14 21:09:15.317969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.675 [2024-07-14 21:09:15.318003] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:52316 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.675 [2024-07-14 21:09:15.318022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.675 #34 NEW cov: 12155 ft: 15458 corp: 25/1087b lim: 100 exec/s: 34 rss: 71Mb L: 41/83 MS: 1 CrossOver- 00:08:18.675 [2024-07-14 21:09:15.398050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.675 [2024-07-14 21:09:15.398080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.675 #35 NEW cov: 12155 ft: 15483 corp: 26/1120b lim: 100 exec/s: 35 rss: 71Mb L: 33/83 MS: 1 ShuffleBytes- 00:08:18.675 [2024-07-14 21:09:15.448240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:23552 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.675 [2024-07-14 21:09:15.448270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.675 [2024-07-14 21:09:15.448317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:60 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.675 [2024-07-14 21:09:15.448341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.675 #36 NEW cov: 12155 ft: 15511 corp: 27/1179b lim: 100 exec/s: 18 rss: 71Mb L: 59/83 MS: 1 ChangeBinInt- 00:08:18.675 #36 DONE cov: 12155 ft: 15511 corp: 27/1179b lim: 100 exec/s: 18 rss: 71Mb 00:08:18.675 ###### Recommended dictionary. ###### 00:08:18.675 "\001*\202g\202\230|f" # Uses: 0 00:08:18.675 ">\000\000\000\000\000\000\000" # Uses: 1 00:08:18.675 ###### End of recommended dictionary. 
###### 00:08:18.675 Done 36 runs in 2 second(s) 00:08:18.934 21:09:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:18.934 21:09:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:18.934 21:09:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:18.934 21:09:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:18.934 00:08:18.934 real 1m3.012s 00:08:18.934 user 1m39.017s 00:08:18.934 sys 0m7.627s 00:08:18.934 21:09:15 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.934 21:09:15 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:18.934 ************************************ 00:08:18.934 END TEST nvmf_fuzz 00:08:18.934 ************************************ 00:08:18.934 21:09:15 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:18.934 21:09:15 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:18.934 21:09:15 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:18.934 21:09:15 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:18.934 21:09:15 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.934 21:09:15 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:18.934 ************************************ 00:08:18.934 START TEST vfio_fuzz 00:08:18.934 ************************************ 00:08:18.934 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:18.934 * Looking for test storage... 00:08:18.934 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:18.934 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:18.934 21:09:15 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:18.934 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:18.934 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # 
CONFIG_CUSTOMOCF=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # 
CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:18.935 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:18.936 #define SPDK_CONFIG_H 00:08:18.936 #define SPDK_CONFIG_APPS 1 00:08:18.936 #define SPDK_CONFIG_ARCH native 00:08:18.936 #undef SPDK_CONFIG_ASAN 00:08:18.936 #undef SPDK_CONFIG_AVAHI 00:08:18.936 #undef SPDK_CONFIG_CET 00:08:18.936 #define SPDK_CONFIG_COVERAGE 1 00:08:18.936 #define SPDK_CONFIG_CROSS_PREFIX 00:08:18.936 #undef SPDK_CONFIG_CRYPTO 00:08:18.936 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:18.936 #undef SPDK_CONFIG_CUSTOMOCF 00:08:18.936 #undef SPDK_CONFIG_DAOS 00:08:18.936 #define SPDK_CONFIG_DAOS_DIR 00:08:18.936 #define SPDK_CONFIG_DEBUG 1 00:08:18.936 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:18.936 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:18.936 #define SPDK_CONFIG_DPDK_INC_DIR 
//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:18.936 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:18.936 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:18.936 #undef SPDK_CONFIG_DPDK_UADK 00:08:18.936 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:18.936 #define SPDK_CONFIG_EXAMPLES 1 00:08:18.936 #undef SPDK_CONFIG_FC 00:08:18.936 #define SPDK_CONFIG_FC_PATH 00:08:18.936 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:18.936 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:18.936 #undef SPDK_CONFIG_FUSE 00:08:18.936 #define SPDK_CONFIG_FUZZER 1 00:08:18.936 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:18.936 #undef SPDK_CONFIG_GOLANG 00:08:18.936 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:18.936 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:18.936 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:18.936 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:18.936 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:18.936 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:18.936 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:18.936 #define SPDK_CONFIG_IDXD 1 00:08:18.936 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:18.936 #undef SPDK_CONFIG_IPSEC_MB 00:08:18.936 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:18.936 #define SPDK_CONFIG_ISAL 1 00:08:18.936 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:18.936 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:18.936 #define SPDK_CONFIG_LIBDIR 00:08:18.936 #undef SPDK_CONFIG_LTO 00:08:18.936 #define SPDK_CONFIG_MAX_LCORES 00:08:18.936 #define SPDK_CONFIG_NVME_CUSE 1 00:08:18.936 #undef SPDK_CONFIG_OCF 00:08:18.936 #define SPDK_CONFIG_OCF_PATH 00:08:18.936 #define SPDK_CONFIG_OPENSSL_PATH 00:08:18.936 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:18.936 #define SPDK_CONFIG_PGO_DIR 00:08:18.936 #undef SPDK_CONFIG_PGO_USE 00:08:18.936 #define SPDK_CONFIG_PREFIX /usr/local 00:08:18.936 #undef SPDK_CONFIG_RAID5F 00:08:18.936 #undef SPDK_CONFIG_RBD 00:08:18.936 #define SPDK_CONFIG_RDMA 1 00:08:18.936 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:18.936 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:18.936 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:18.936 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:18.936 #undef SPDK_CONFIG_SHARED 00:08:18.936 #undef SPDK_CONFIG_SMA 00:08:18.936 #define SPDK_CONFIG_TESTS 1 00:08:18.936 #undef SPDK_CONFIG_TSAN 00:08:18.936 #define SPDK_CONFIG_UBLK 1 00:08:18.936 #define SPDK_CONFIG_UBSAN 1 00:08:18.936 #undef SPDK_CONFIG_UNIT_TESTS 00:08:18.936 #undef SPDK_CONFIG_URING 00:08:18.936 #define SPDK_CONFIG_URING_PATH 00:08:18.936 #undef SPDK_CONFIG_URING_ZNS 00:08:18.936 #undef SPDK_CONFIG_USDT 00:08:18.936 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:18.936 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:18.936 #define SPDK_CONFIG_VFIO_USER 1 00:08:18.936 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:18.936 #define SPDK_CONFIG_VHOST 1 00:08:18.936 #define SPDK_CONFIG_VIRTIO 1 00:08:18.936 #undef SPDK_CONFIG_VTUNE 00:08:18.936 #define SPDK_CONFIG_VTUNE_DIR 00:08:18.936 #define SPDK_CONFIG_WERROR 1 00:08:18.936 #define SPDK_CONFIG_WPDK_DIR 00:08:18.936 #undef SPDK_CONFIG_XNVME 00:08:18.936 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:18.936 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 1 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:08:19.197 
21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:08:19.197 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.198 21:09:15 
llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:19.198 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 4028821 ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 4028821 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.HEdCGP 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.HEdCGP/tests/vfio /tmp/spdk.HEdCGP 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=954408960 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4330020864 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52891144192 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742317568 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8851173376 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866448384 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342484992 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348465152 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5980160 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870433792 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=724992 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:19.199 * Looking for test storage... 
00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=52891144192 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=11065765888 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.199 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:08:19.199 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:19.200 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:08:19.200 21:09:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:19.200 [2024-07-14 21:09:16.007069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:19.200 [2024-07-14 21:09:16.007162] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028861 ] 00:08:19.200 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.200 [2024-07-14 21:09:16.080081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.459 [2024-07-14 21:09:16.122530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.459 INFO: Running with entropic power schedule (0xFF, 100). 00:08:19.459 INFO: Seed: 3561123773 00:08:19.459 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:19.459 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:19.459 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:19.459 INFO: A corpus is not provided, starting from an empty corpus 00:08:19.459 #2 INITED exec/s: 0 rss: 63Mb 00:08:19.459 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:19.459 This may also happen if the target rejected all inputs we tried so far 00:08:19.459 [2024-07-14 21:09:16.350032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:19.977 NEW_FUNC[1/656]: 0x4933d0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:19.977 NEW_FUNC[2/656]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:19.977 #11 NEW cov: 10922 ft: 10480 corp: 2/7b lim: 6 exec/s: 0 rss: 68Mb L: 6/6 MS: 4 CopyPart-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:19.977 #22 NEW cov: 10938 ft: 14272 corp: 3/13b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ChangeByte- 00:08:20.236 #28 NEW cov: 10938 ft: 14789 corp: 4/19b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ChangeByte- 00:08:20.236 #29 NEW cov: 10938 ft: 15106 corp: 5/25b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ChangeByte- 00:08:20.495 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:20.495 #30 NEW cov: 10955 ft: 15601 corp: 6/31b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:20.495 #37 NEW cov: 10955 ft: 15913 corp: 7/37b lim: 6 exec/s: 37 rss: 70Mb L: 6/6 MS: 2 EraseBytes-CopyPart- 00:08:20.753 #43 NEW cov: 10955 ft: 16583 corp: 8/43b lim: 6 exec/s: 43 rss: 70Mb L: 6/6 MS: 1 CopyPart- 00:08:20.753 #44 NEW cov: 10955 ft: 16650 corp: 9/49b lim: 6 exec/s: 44 rss: 70Mb L: 6/6 MS: 1 CopyPart- 00:08:21.011 #50 NEW cov: 10955 ft: 16908 corp: 10/55b lim: 6 exec/s: 50 rss: 70Mb L: 6/6 MS: 1 ChangeBit- 00:08:21.011 #53 NEW cov: 10955 ft: 16967 corp: 11/61b lim: 6 exec/s: 53 rss: 70Mb L: 6/6 MS: 3 
EraseBytes-ShuffleBytes-InsertByte- 00:08:21.269 #54 NEW cov: 10955 ft: 17172 corp: 12/67b lim: 6 exec/s: 54 rss: 71Mb L: 6/6 MS: 1 ChangeByte- 00:08:21.269 #55 NEW cov: 10955 ft: 17585 corp: 13/73b lim: 6 exec/s: 55 rss: 71Mb L: 6/6 MS: 1 ChangeByte- 00:08:21.528 #56 NEW cov: 10962 ft: 17622 corp: 14/79b lim: 6 exec/s: 56 rss: 71Mb L: 6/6 MS: 1 CrossOver- 00:08:21.528 #57 NEW cov: 10962 ft: 17732 corp: 15/85b lim: 6 exec/s: 57 rss: 71Mb L: 6/6 MS: 1 ChangeBit- 00:08:21.528 #58 NEW cov: 10962 ft: 17820 corp: 16/91b lim: 6 exec/s: 29 rss: 71Mb L: 6/6 MS: 1 CrossOver- 00:08:21.528 #58 DONE cov: 10962 ft: 17820 corp: 16/91b lim: 6 exec/s: 29 rss: 71Mb 00:08:21.528 Done 58 runs in 2 second(s) 00:08:21.787 [2024-07-14 21:09:18.446644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:21.787 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.045 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:22.045 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.045 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.045 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:22.046 21:09:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:22.046 [2024-07-14 21:09:18.726330] Starting SPDK v24.05.1-pre git sha1 
5fa2f5086 / DPDK 22.11.4 initialization... 00:08:22.046 [2024-07-14 21:09:18.726403] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029399 ] 00:08:22.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.046 [2024-07-14 21:09:18.796419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.046 [2024-07-14 21:09:18.833914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.305 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.305 INFO: Seed: 1977154396 00:08:22.305 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:22.305 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:22.305 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:22.305 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.305 #2 INITED exec/s: 0 rss: 63Mb 00:08:22.305 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:22.305 This may also happen if the target rejected all inputs we tried so far 00:08:22.305 [2024-07-14 21:09:19.060657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:22.305 [2024-07-14 21:09:19.105518] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:22.305 [2024-07-14 21:09:19.105542] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:22.305 [2024-07-14 21:09:19.105563] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:22.821 NEW_FUNC[1/656]: 0x493970 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:22.821 NEW_FUNC[2/656]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:22.821 #41 NEW cov: 10915 ft: 10799 corp: 2/5b lim: 4 exec/s: 0 rss: 68Mb L: 4/4 MS: 4 CopyPart-CrossOver-ChangeBit-CopyPart- 00:08:22.821 [2024-07-14 21:09:19.554456] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:22.821 [2024-07-14 21:09:19.554502] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:22.821 [2024-07-14 21:09:19.554520] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:22.821 NEW_FUNC[1/2]: 0x16d48d0 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:08:22.821 NEW_FUNC[2/2]: 0x16f1300 in _nvme_md_excluded_from_xfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:54 00:08:22.821 #42 NEW cov: 10934 ft: 14072 corp: 3/9b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:22.821 [2024-07-14 21:09:19.722218] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:22.821 [2024-07-14 21:09:19.722241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:22.821 [2024-07-14 21:09:19.722259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.080 #58 NEW cov: 10934 ft: 14721 corp: 4/13b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:23.080 [2024-07-14 
21:09:19.893256] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.080 [2024-07-14 21:09:19.893279] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.080 [2024-07-14 21:09:19.893296] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.338 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:23.338 #59 NEW cov: 10951 ft: 15891 corp: 5/17b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeByte- 00:08:23.338 [2024-07-14 21:09:20.074732] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.338 [2024-07-14 21:09:20.074760] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.338 [2024-07-14 21:09:20.074778] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.338 #65 NEW cov: 10951 ft: 16829 corp: 6/21b lim: 4 exec/s: 65 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:08:23.596 [2024-07-14 21:09:20.248835] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.596 [2024-07-14 21:09:20.248860] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.596 [2024-07-14 21:09:20.248876] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.596 #66 NEW cov: 10951 ft: 17418 corp: 7/25b lim: 4 exec/s: 66 rss: 70Mb L: 4/4 MS: 1 CrossOver- 00:08:23.596 [2024-07-14 21:09:20.409764] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.596 [2024-07-14 21:09:20.409787] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.596 [2024-07-14 21:09:20.409813] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.854 #72 NEW cov: 10951 ft: 17681 corp: 8/29b lim: 4 exec/s: 72 rss: 70Mb L: 4/4 MS: 1 ChangeByte- 00:08:23.854 [2024-07-14 21:09:20.576396] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.854 [2024-07-14 21:09:20.576418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.854 [2024-07-14 21:09:20.576435] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:23.854 #73 NEW cov: 10951 ft: 17720 corp: 9/33b lim: 4 exec/s: 73 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:23.854 [2024-07-14 21:09:20.742724] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:23.854 [2024-07-14 21:09:20.742750] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:23.854 [2024-07-14 21:09:20.742768] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.112 #74 NEW cov: 10951 ft: 17801 corp: 10/37b lim: 4 exec/s: 74 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:08:24.112 [2024-07-14 21:09:20.907073] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.112 [2024-07-14 21:09:20.907094] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.112 [2024-07-14 21:09:20.907112] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.112 #87 NEW cov: 10951 ft: 18065 corp: 11/41b lim: 4 exec/s: 87 rss: 70Mb L: 4/4 MS: 3 EraseBytes-ChangeBinInt-CopyPart- 00:08:24.370 
[2024-07-14 21:09:21.075800] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:24.370 [2024-07-14 21:09:21.075822] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:24.370 [2024-07-14 21:09:21.075839] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:24.370 #88 NEW cov: 10951 ft: 18261 corp: 12/45b lim: 4 exec/s: 44 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:24.370 #88 DONE cov: 10951 ft: 18261 corp: 12/45b lim: 4 exec/s: 44 rss: 70Mb 00:08:24.370 Done 88 runs in 2 second(s) 00:08:24.370 [2024-07-14 21:09:21.191635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:24.629 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:24.629 21:09:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:24.629 [2024-07-14 21:09:21.475219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:24.629 [2024-07-14 21:09:21.475312] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029869 ] 00:08:24.629 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.887 [2024-07-14 21:09:21.550272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.887 [2024-07-14 21:09:21.589057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.887 INFO: Running with entropic power schedule (0xFF, 100). 00:08:24.887 INFO: Seed: 446207282 00:08:24.887 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:24.887 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:24.887 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:24.888 INFO: A corpus is not provided, starting from an empty corpus 00:08:24.888 #2 INITED exec/s: 0 rss: 63Mb 00:08:24.888 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:24.888 This may also happen if the target rejected all inputs we tried so far 00:08:25.146 [2024-07-14 21:09:21.823413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:25.146 [2024-07-14 21:09:21.865553] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:25.404 NEW_FUNC[1/655]: 0x494350 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:25.404 NEW_FUNC[2/655]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:25.404 #44 NEW cov: 10899 ft: 10863 corp: 2/9b lim: 8 exec/s: 0 rss: 69Mb L: 8/8 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:25.662 [2024-07-14 21:09:22.325815] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:25.662 NEW_FUNC[1/2]: 0x16d48d0 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:08:25.662 NEW_FUNC[2/2]: 0x16f1300 in _nvme_md_excluded_from_xfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:54 00:08:25.662 #50 NEW cov: 10918 ft: 13875 corp: 3/17b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 CopyPart- 00:08:25.662 [2024-07-14 21:09:22.502288] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:25.920 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:25.920 #55 NEW cov: 10938 ft: 15432 corp: 4/25b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 5 CrossOver-InsertByte-ShuffleBytes-ChangeBit-CrossOver- 00:08:25.920 [2024-07-14 21:09:22.681360] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:25.920 #56 NEW cov: 10938 ft: 15988 corp: 5/33b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CopyPart- 00:08:26.179 [2024-07-14 21:09:22.846088] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.179 #57 NEW cov: 10938 ft: 16543 corp: 6/41b lim: 8 exec/s: 57 rss: 71Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:26.179 [2024-07-14 21:09:23.011793] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.437 #62 NEW cov: 10938 ft: 
17199 corp: 7/49b lim: 8 exec/s: 62 rss: 71Mb L: 8/8 MS: 5 ChangeBit-CrossOver-ShuffleBytes-InsertRepeatedBytes-CopyPart- 00:08:26.437 [2024-07-14 21:09:23.191025] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.437 #63 NEW cov: 10938 ft: 17518 corp: 8/57b lim: 8 exec/s: 63 rss: 71Mb L: 8/8 MS: 1 ChangeByte- 00:08:26.696 [2024-07-14 21:09:23.360486] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.696 #69 NEW cov: 10938 ft: 17662 corp: 9/65b lim: 8 exec/s: 69 rss: 71Mb L: 8/8 MS: 1 CopyPart- 00:08:26.696 [2024-07-14 21:09:23.526230] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.954 #70 NEW cov: 10945 ft: 17856 corp: 10/73b lim: 8 exec/s: 70 rss: 71Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:08:26.954 [2024-07-14 21:09:23.694223] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:26.954 #71 NEW cov: 10945 ft: 17904 corp: 11/81b lim: 8 exec/s: 35 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:08:26.954 #71 DONE cov: 10945 ft: 17904 corp: 11/81b lim: 8 exec/s: 35 rss: 71Mb 00:08:26.954 Done 71 runs in 2 second(s) 00:08:26.954 [2024-07-14 21:09:23.815624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:27.212 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:27.213 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.213 21:09:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:27.213 21:09:24 
llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:27.213 [2024-07-14 21:09:24.095611] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:27.213 [2024-07-14 21:09:24.095683] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030229 ] 00:08:27.472 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.472 [2024-07-14 21:09:24.168450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.472 [2024-07-14 21:09:24.206477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.730 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.730 INFO: Seed: 3062196401 00:08:27.730 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:27.730 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:27.730 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:27.730 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.730 #2 INITED exec/s: 0 rss: 64Mb 00:08:27.730 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:27.730 This may also happen if the target rejected all inputs we tried so far 00:08:27.730 [2024-07-14 21:09:24.440198] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:27.989 NEW_FUNC[1/657]: 0x494a30 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:27.989 NEW_FUNC[2/657]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:27.989 #37 NEW cov: 10912 ft: 10884 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 5 ChangeByte-CrossOver-InsertByte-CrossOver-InsertRepeatedBytes- 00:08:28.248 #38 NEW cov: 10926 ft: 14187 corp: 3/65b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:28.507 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:28.507 #44 NEW cov: 10946 ft: 15068 corp: 4/97b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:28.765 #45 NEW cov: 10946 ft: 15530 corp: 5/129b lim: 32 exec/s: 45 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:28.765 #46 NEW cov: 10946 ft: 15957 corp: 6/161b lim: 32 exec/s: 46 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:08:29.025 #47 NEW cov: 10946 ft: 16240 corp: 7/193b lim: 32 exec/s: 47 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:29.408 #48 NEW cov: 10946 ft: 16857 corp: 8/225b lim: 32 exec/s: 48 rss: 72Mb L: 32/32 MS: 1 CrossOver- 00:08:29.408 #49 NEW cov: 10946 ft: 17282 corp: 9/257b lim: 32 exec/s: 49 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:08:29.408 #50 NEW cov: 10953 ft: 17331 corp: 10/289b lim: 32 exec/s: 50 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:08:29.667 #51 NEW cov: 10953 ft: 17352 corp: 11/321b lim: 32 exec/s: 25 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 
00:08:29.667 #51 DONE cov: 10953 ft: 17352 corp: 11/321b lim: 32 exec/s: 25 rss: 73Mb 00:08:29.667 Done 51 runs in 2 second(s) 00:08:29.667 [2024-07-14 21:09:26.492627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:29.926 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:29.926 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:29.927 21:09:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:29.927 [2024-07-14 21:09:26.772147] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:29.927 [2024-07-14 21:09:26.772232] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030765 ] 00:08:29.927 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.186 [2024-07-14 21:09:26.843137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.186 [2024-07-14 21:09:26.879934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.186 INFO: Running with entropic power schedule (0xFF, 100). 00:08:30.186 INFO: Seed: 1433226386 00:08:30.186 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:30.186 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:30.186 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:30.186 INFO: A corpus is not provided, starting from an empty corpus 00:08:30.186 #2 INITED exec/s: 0 rss: 63Mb 00:08:30.186 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:30.186 This may also happen if the target rejected all inputs we tried so far 00:08:30.445 [2024-07-14 21:09:27.106400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:30.704 NEW_FUNC[1/657]: 0x4952b0 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:30.704 NEW_FUNC[2/657]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:30.704 #26 NEW cov: 10909 ft: 10839 corp: 2/33b lim: 32 exec/s: 0 rss: 68Mb L: 32/32 MS: 4 InsertRepeatedBytes-ShuffleBytes-ChangeBinInt-CopyPart- 00:08:30.963 #37 NEW cov: 10924 ft: 14238 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:31.222 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:31.222 #38 NEW cov: 10944 ft: 15826 corp: 4/97b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:31.222 #44 NEW cov: 10944 ft: 16533 corp: 5/129b lim: 32 exec/s: 44 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:08:31.481 #45 NEW cov: 10944 ft: 16582 corp: 6/161b lim: 32 exec/s: 45 rss: 70Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:31.740 #61 NEW cov: 10944 ft: 17048 corp: 7/193b lim: 32 exec/s: 61 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:31.999 #62 NEW cov: 10944 ft: 17236 corp: 8/225b lim: 32 exec/s: 62 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:31.999 #68 NEW cov: 10944 ft: 17590 corp: 9/257b lim: 32 exec/s: 68 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:32.258 #69 NEW cov: 10951 ft: 17650 corp: 10/289b lim: 32 exec/s: 69 rss: 70Mb L: 32/32 MS: 1 CrossOver- 00:08:32.589 #70 NEW cov: 10951 ft: 18048 corp: 11/321b lim: 32 exec/s: 35 rss: 70Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:32.589 #70 DONE cov: 10951 ft: 18048 corp: 11/321b lim: 32 exec/s: 35 rss: 70Mb 00:08:32.589 Done 70 runs in 2 second(s) 00:08:32.589 [2024-07-14 21:09:29.213630] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:32.589 21:09:29 
llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:32.589 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:32.589 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:32.855 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:32.855 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:32.855 21:09:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:32.855 [2024-07-14 21:09:29.482891] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:32.855 [2024-07-14 21:09:29.482949] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031298 ] 00:08:32.855 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.855 [2024-07-14 21:09:29.547439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.855 [2024-07-14 21:09:29.586775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.114 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:33.114 INFO: Seed: 4148239898 00:08:33.114 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:33.114 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:33.114 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:33.114 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.114 #2 INITED exec/s: 0 rss: 64Mb 00:08:33.114 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.114 This may also happen if the target rejected all inputs we tried so far 00:08:33.114 [2024-07-14 21:09:29.817874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:33.114 [2024-07-14 21:09:29.865502] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.114 [2024-07-14 21:09:29.865539] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.373 NEW_FUNC[1/658]: 0x495cb0 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:33.373 NEW_FUNC[2/658]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:33.373 #55 NEW cov: 10918 ft: 10858 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 3 InsertByte-CMP-InsertRepeatedBytes- DE: "\010\000"- 00:08:33.632 [2024-07-14 21:09:30.344161] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.632 [2024-07-14 21:09:30.344207] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.632 #64 NEW cov: 10940 ft: 13667 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 4 CrossOver-InsertByte-CopyPart-InsertByte- 00:08:33.632 [2024-07-14 21:09:30.515553] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.632 [2024-07-14 21:09:30.515586] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.891 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:33.891 #70 NEW cov: 10957 ft: 15471 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:33.891 [2024-07-14 21:09:30.697107] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.891 [2024-07-14 21:09:30.697145] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.150 #76 NEW cov: 10957 ft: 15935 corp: 5/53b lim: 13 exec/s: 76 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:34.150 [2024-07-14 21:09:30.870476] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.150 [2024-07-14 21:09:30.870507] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.150 #77 NEW cov: 10957 ft: 16374 corp: 6/66b lim: 13 exec/s: 77 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:34.150 [2024-07-14 21:09:31.041350] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.150 [2024-07-14 21:09:31.041380] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.409 #83 NEW cov: 10957 ft: 16437 corp: 7/79b lim: 13 exec/s: 83 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:34.409 [2024-07-14 21:09:31.209425] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.409 [2024-07-14 21:09:31.209460] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.409 #84 NEW cov: 10957 ft: 16830 corp: 8/92b lim: 13 exec/s: 84 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:08:34.668 [2024-07-14 21:09:31.381131] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.668 [2024-07-14 21:09:31.381162] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.668 #85 NEW cov: 10957 ft: 16894 corp: 9/105b lim: 13 exec/s: 85 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:34.668 [2024-07-14 21:09:31.551450] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.668 [2024-07-14 21:09:31.551480] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.927 #86 NEW cov: 10964 ft: 16969 corp: 10/118b lim: 13 exec/s: 86 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:08:34.927 [2024-07-14 21:09:31.721544] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:34.927 [2024-07-14 21:09:31.721576] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:34.927 #87 NEW cov: 10964 ft: 17314 corp: 11/131b lim: 13 exec/s: 43 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:34.927 #87 DONE cov: 10964 ft: 17314 corp: 11/131b lim: 13 exec/s: 43 rss: 74Mb 00:08:34.927 ###### Recommended dictionary. ###### 00:08:34.927 "\010\000" # Uses: 3 00:08:34.927 ###### End of recommended dictionary. ###### 00:08:34.927 Done 87 runs in 2 second(s) 00:08:35.186 [2024-07-14 21:09:31.840626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:35.186 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:35.445 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:35.445 21:09:32 
llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:35.446 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:35.446 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:35.446 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:35.446 21:09:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:35.446 [2024-07-14 21:09:32.125333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:35.446 [2024-07-14 21:09:32.125405] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031693 ] 00:08:35.446 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.446 [2024-07-14 21:09:32.197231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.446 [2024-07-14 21:09:32.235792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.705 INFO: Running with entropic power schedule (0xFF, 100). 00:08:35.705 INFO: Seed: 2499268442 00:08:35.705 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:35.705 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:35.705 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:35.705 INFO: A corpus is not provided, starting from an empty corpus 00:08:35.705 #2 INITED exec/s: 0 rss: 63Mb 00:08:35.705 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
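The libFuzzer status lines that follow use the usual fields: roughly, `cov:` is the number of coverage edges hit so far, `ft:` the number of features, `corp:` the corpus size as count/bytes, `lim:` the current input-length limit, `exec/s:` the execution rate, `rss:` resident memory, `L:` the length of the input, and `MS:` the mutation sequence that produced it. A small post-processing helper, not part of the SPDK test scripts, can pull the per-target `DONE` summaries out of a console log like this one; the log filename below is a placeholder.

```bash
#!/usr/bin/env bash
# Hypothetical helper (not from the SPDK repo): summarize libFuzzer results
# from a saved console log. "build.log" is a placeholder filename.
log=${1:-build.log}

# Each target ends with a line like:
#   #72 DONE cov: 10952 ft: 17974 corp: 12/100b lim: 9 exec/s: 36 rss: 71Mb
grep -E '#[0-9]+ +DONE ' "$log" |
  awk '{
    for (i = 1; i <= NF; i++) {
      if ($i == "cov:")  cov  = $(i + 1)   # coverage edges reached
      if ($i == "corp:") corp = $(i + 1)   # corpus size (units/bytes)
    }
    printf "run %d: %s edges covered, corpus %s\n", ++n, cov, corp
  }'
```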
00:08:35.705 This may also happen if the target rejected all inputs we tried so far 00:08:35.705 [2024-07-14 21:09:32.467328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:35.705 [2024-07-14 21:09:32.512500] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:35.705 [2024-07-14 21:09:32.512533] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.223 NEW_FUNC[1/658]: 0x4969a0 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:36.223 NEW_FUNC[2/658]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:36.223 #24 NEW cov: 10895 ft: 10821 corp: 2/10b lim: 9 exec/s: 0 rss: 69Mb L: 9/9 MS: 2 InsertRepeatedBytes-CopyPart- 00:08:36.223 [2024-07-14 21:09:32.964669] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.223 [2024-07-14 21:09:32.964714] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.223 #43 NEW cov: 10928 ft: 13423 corp: 3/19b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 4 ChangeBit-ChangeBinInt-InsertRepeatedBytes-CrossOver- 00:08:36.481 [2024-07-14 21:09:33.143584] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.481 [2024-07-14 21:09:33.143615] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.481 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:36.481 #44 NEW cov: 10945 ft: 14390 corp: 4/28b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:36.481 [2024-07-14 21:09:33.316569] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.481 [2024-07-14 21:09:33.316599] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.739 #45 NEW cov: 10945 ft: 15373 corp: 5/37b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:36.739 [2024-07-14 21:09:33.486646] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.739 [2024-07-14 21:09:33.486677] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.739 #51 NEW cov: 10945 ft: 16400 corp: 6/46b lim: 9 exec/s: 51 rss: 70Mb L: 9/9 MS: 1 ChangeByte- 00:08:36.998 [2024-07-14 21:09:33.651638] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.998 [2024-07-14 21:09:33.651671] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:36.998 #52 NEW cov: 10945 ft: 16831 corp: 7/55b lim: 9 exec/s: 52 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:36.998 [2024-07-14 21:09:33.826823] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:36.998 [2024-07-14 21:09:33.826855] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.257 #63 NEW cov: 10945 ft: 17033 corp: 8/64b lim: 9 exec/s: 63 rss: 71Mb L: 9/9 MS: 1 CrossOver- 00:08:37.257 [2024-07-14 21:09:33.992947] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.257 [2024-07-14 21:09:33.992978] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.257 #69 NEW 
cov: 10945 ft: 17207 corp: 9/73b lim: 9 exec/s: 69 rss: 71Mb L: 9/9 MS: 1 ChangeByte- 00:08:37.257 [2024-07-14 21:09:34.157547] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.257 [2024-07-14 21:09:34.157578] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.518 #70 NEW cov: 10952 ft: 17293 corp: 10/82b lim: 9 exec/s: 70 rss: 71Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:37.518 [2024-07-14 21:09:34.325715] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.518 [2024-07-14 21:09:34.325745] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.778 #71 NEW cov: 10952 ft: 17612 corp: 11/91b lim: 9 exec/s: 71 rss: 71Mb L: 9/9 MS: 1 ChangeByte- 00:08:37.778 [2024-07-14 21:09:34.490897] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.778 [2024-07-14 21:09:34.490926] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.778 #72 NEW cov: 10952 ft: 17974 corp: 12/100b lim: 9 exec/s: 36 rss: 71Mb L: 9/9 MS: 1 ChangeBit- 00:08:37.778 #72 DONE cov: 10952 ft: 17974 corp: 12/100b lim: 9 exec/s: 36 rss: 71Mb 00:08:37.778 Done 72 runs in 2 second(s) 00:08:37.778 [2024-07-14 21:09:34.610631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:38.038 21:09:34 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:38.038 21:09:34 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:38.038 21:09:34 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.038 21:09:34 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:38.038 00:08:38.038 real 0m19.171s 00:08:38.038 user 0m26.703s 00:08:38.038 sys 0m1.867s 00:08:38.038 21:09:34 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.038 21:09:34 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:38.038 ************************************ 00:08:38.038 END TEST vfio_fuzz 00:08:38.038 ************************************ 00:08:38.038 21:09:34 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:38.038 00:08:38.038 real 1m22.456s 00:08:38.038 user 2m5.824s 00:08:38.038 sys 0m9.684s 00:08:38.038 21:09:34 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.038 21:09:34 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:38.038 ************************************ 00:08:38.038 END TEST llvm_fuzz 00:08:38.038 ************************************ 00:08:38.038 21:09:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:08:38.038 21:09:34 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:08:38.038 21:09:34 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:08:38.038 21:09:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:38.038 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.038 21:09:34 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:08:38.038 21:09:34 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:08:38.038 21:09:34 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:08:38.038 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:08:44.610 INFO: APP EXITING 00:08:44.610 INFO: killing all VMs 00:08:44.610 INFO: killing vhost app 00:08:44.610 INFO: EXIT DONE 00:08:47.144 Waiting for block devices as requested 00:08:47.144 0000:00:04.7 (8086 2021): 
vfio-pci -> ioatdma 00:08:47.144 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:47.144 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:47.144 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:47.402 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:47.402 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:47.402 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:47.402 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:47.661 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:47.661 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:47.661 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:47.920 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:47.920 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:47.920 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:47.920 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:48.179 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:48.179 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:51.469 Cleaning 00:08:51.469 Removing: /dev/shm/spdk_tgt_trace.pid3998291 00:08:51.469 Removing: /var/run/dpdk/spdk_pid3995877 00:08:51.469 Removing: /var/run/dpdk/spdk_pid3996962 00:08:51.469 Removing: /var/run/dpdk/spdk_pid3998291 00:08:51.469 Removing: /var/run/dpdk/spdk_pid3998741 00:08:51.469 Removing: /var/run/dpdk/spdk_pid3999815 00:08:51.469 Removing: /var/run/dpdk/spdk_pid3999851 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4000960 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4000966 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4001370 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4001691 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4001794 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4002085 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4002403 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4002613 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4002759 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4003037 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4003889 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4006869 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4007115 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4007374 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4007438 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4007948 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4007958 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4008534 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4008688 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4009074 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4009086 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4009363 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4009383 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4009761 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4010039 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4010327 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4010563 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4010693 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4010733 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4011028 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4011292 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4011484 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4011677 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4011922 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4012208 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4012487 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4012774 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4013053 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4013334 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4013588 00:08:51.469 Removing: /var/run/dpdk/spdk_pid4013779 00:08:51.469 Removing: 
/var/run/dpdk/spdk_pid4013971 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4014231 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4014516 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4014799 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4015088 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4015370 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4015654 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4015902 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4016136 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4016298 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4016565 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4017094 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4017515 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4017919 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4018448 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4018895 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4019277 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4019812 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4020238 00:08:51.728 Removing: /var/run/dpdk/spdk_pid4020638 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4021165 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4021604 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4022001 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4022529 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4022865 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4023354 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4023888 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4024174 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4024703 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4025108 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4025578 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4026187 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4026890 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4027441 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4027910 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4028252 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4028861 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4029399 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4029869 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4030229 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4030765 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4031298 00:08:51.729 Removing: /var/run/dpdk/spdk_pid4031693 00:08:51.729 Clean 00:08:51.729 21:09:48 -- common/autotest_common.sh@1447 -- # return 0 00:08:51.729 21:09:48 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:08:51.729 21:09:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.729 21:09:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 21:09:48 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:08:51.729 21:09:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.729 21:09:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.988 21:09:48 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:51.988 21:09:48 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:51.988 21:09:48 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:51.988 21:09:48 -- spdk/autotest.sh@391 -- # hash lcov 00:08:51.988 21:09:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:51.988 21:09:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:51.988 21:09:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:51.988 
21:09:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.988 21:09:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.988 21:09:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.988 21:09:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.988 21:09:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.988 21:09:48 -- paths/export.sh@5 -- $ export PATH 00:08:51.988 21:09:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.988 21:09:48 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:51.988 21:09:48 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:51.988 21:09:48 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720984188.XXXXXX 00:08:51.988 21:09:48 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720984188.uTvJPF 00:08:51.988 21:09:48 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:51.988 21:09:48 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:08:51.988 21:09:48 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:51.988 21:09:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:08:51.988 21:09:48 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:51.988 21:09:48 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:51.988 21:09:48 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:51.988 21:09:48 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:08:51.988 21:09:48 -- common/autotest_common.sh@10 -- $ set +x 00:08:51.988 21:09:48 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user' 00:08:51.988 21:09:48 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:51.988 21:09:48 -- pm/common@17 -- $ local monitor 00:08:51.988 21:09:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:51.989 21:09:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:51.989 21:09:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:51.989 21:09:48 -- pm/common@21 -- $ date +%s 00:08:51.989 21:09:48 -- pm/common@21 -- $ date +%s 00:08:51.989 21:09:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:51.989 21:09:48 -- pm/common@25 -- $ sleep 1 00:08:51.989 21:09:48 -- pm/common@21 -- $ date +%s 00:08:51.989 21:09:48 -- pm/common@21 -- $ date +%s 00:08:51.989 21:09:48 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720984188 00:08:51.989 21:09:48 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720984188 00:08:51.989 21:09:48 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720984188 00:08:51.989 21:09:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720984188 00:08:51.989 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720984188_collect-vmstat.pm.log 00:08:51.989 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720984188_collect-cpu-load.pm.log 00:08:51.989 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720984188_collect-cpu-temp.pm.log 00:08:51.989 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720984188_collect-bmc-pm.bmc.pm.log 00:08:52.926 21:09:49 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:52.926 21:09:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:08:52.926 21:09:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:52.926 21:09:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:52.926 21:09:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:08:52.926 21:09:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:52.926 21:09:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:52.926 21:09:49 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:52.926 21:09:49 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:52.926 21:09:49 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:53.186 
21:09:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:08:53.186 21:09:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:53.186 21:09:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:53.186 21:09:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:53.186 21:09:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.186 21:09:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:53.186 21:09:49 -- pm/common@44 -- $ pid=4038638 00:08:53.186 21:09:49 -- pm/common@50 -- $ kill -TERM 4038638 00:08:53.186 21:09:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.186 21:09:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:53.186 21:09:49 -- pm/common@44 -- $ pid=4038641 00:08:53.186 21:09:49 -- pm/common@50 -- $ kill -TERM 4038641 00:08:53.186 21:09:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.186 21:09:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:53.186 21:09:49 -- pm/common@44 -- $ pid=4038644 00:08:53.186 21:09:49 -- pm/common@50 -- $ kill -TERM 4038644 00:08:53.186 21:09:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.186 21:09:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:53.186 21:09:49 -- pm/common@44 -- $ pid=4038677 00:08:53.186 21:09:49 -- pm/common@50 -- $ sudo -E kill -TERM 4038677 00:08:53.186 + [[ -n 3876875 ]] 00:08:53.186 + sudo kill 3876875 00:08:53.196 [Pipeline] } 00:08:53.217 [Pipeline] // stage 00:08:53.224 [Pipeline] } 00:08:53.244 [Pipeline] // timeout 00:08:53.251 [Pipeline] } 00:08:53.269 [Pipeline] // catchError 00:08:53.275 [Pipeline] } 00:08:53.295 [Pipeline] // wrap 00:08:53.302 [Pipeline] } 00:08:53.319 [Pipeline] // catchError 00:08:53.329 [Pipeline] stage 00:08:53.332 [Pipeline] { (Epilogue) 00:08:53.348 [Pipeline] catchError 00:08:53.351 [Pipeline] { 00:08:53.366 [Pipeline] echo 00:08:53.368 Cleanup processes 00:08:53.375 [Pipeline] sh 00:08:53.661 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:53.661 3951772 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720983866 00:08:53.661 3951805 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720983866 00:08:53.661 4038815 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:53.661 4039681 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:53.677 [Pipeline] sh 00:08:53.962 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:53.962 ++ grep -v 'sudo pgrep' 00:08:53.962 ++ awk '{print $1}' 00:08:53.962 + sudo kill -9 3951772 3951805 4038815 00:08:53.974 [Pipeline] sh 00:08:54.258 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:54.258 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:54.258 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:55.648 [Pipeline] sh 
00:08:55.930 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:55.930 Artifacts sizes are good 00:08:55.945 [Pipeline] archiveArtifacts 00:08:55.953 Archiving artifacts 00:08:56.004 [Pipeline] sh 00:08:56.337 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:56.350 [Pipeline] cleanWs 00:08:56.359 [WS-CLEANUP] Deleting project workspace... 00:08:56.359 [WS-CLEANUP] Deferred wipeout is used... 00:08:56.364 [WS-CLEANUP] done 00:08:56.366 [Pipeline] } 00:08:56.387 [Pipeline] // catchError 00:08:56.396 [Pipeline] sh 00:08:56.672 + logger -p user.info -t JENKINS-CI 00:08:56.681 [Pipeline] } 00:08:56.697 [Pipeline] // stage 00:08:56.703 [Pipeline] } 00:08:56.716 [Pipeline] // node 00:08:56.719 [Pipeline] End of Pipeline 00:08:56.824 Finished: SUCCESS