00:00:00.001 Started by upstream project "autotest-per-patch" build number 126186 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23945 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.013 The recommended git tool is: git 00:00:00.013 using credential 00000000-0000-0000-0000-000000000002 00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.029 Fetching changes from the remote Git repository 00:00:00.032 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.054 Using shallow fetch with depth 1 00:00:00.054 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.054 > git --version # timeout=10 00:00:00.078 > git --version # 'git version 2.39.2' 00:00:00.078 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/23 # timeout=5 00:00:02.662 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.673 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.690 Checking out Revision b36476c4eef2004836014399fbf414610d5aa128 (FETCH_HEAD) 00:00:02.690 > git config core.sparsecheckout # timeout=10 00:00:02.724 > git read-tree -mu HEAD # timeout=10 00:00:02.742 > git checkout -f b36476c4eef2004836014399fbf414610d5aa128 # timeout=5 00:00:02.768 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch" 00:00:02.768 > git rev-list --no-walk 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=10 00:00:02.863 [Pipeline] Start of Pipeline 00:00:02.884 [Pipeline] library 00:00:02.886 Loading library shm_lib@master 00:00:02.886 Library shm_lib@master is cached. Copying from home. 00:00:02.905 [Pipeline] node 00:00:02.919 Running on WFP29 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.921 [Pipeline] { 00:00:02.932 [Pipeline] catchError 00:00:02.933 [Pipeline] { 00:00:02.945 [Pipeline] wrap 00:00:02.954 [Pipeline] { 00:00:02.962 [Pipeline] stage 00:00:02.963 [Pipeline] { (Prologue) 00:00:03.139 [Pipeline] sh 00:00:03.422 + logger -p user.info -t JENKINS-CI 00:00:03.439 [Pipeline] echo 00:00:03.440 Node: WFP29 00:00:03.445 [Pipeline] sh 00:00:03.739 [Pipeline] setCustomBuildProperty 00:00:03.750 [Pipeline] echo 00:00:03.752 Cleanup processes 00:00:03.757 [Pipeline] sh 00:00:04.044 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.044 2722637 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.057 [Pipeline] sh 00:00:04.342 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.342 ++ grep -v 'sudo pgrep' 00:00:04.342 ++ awk '{print $1}' 00:00:04.342 + sudo kill -9 00:00:04.342 + true 00:00:04.358 [Pipeline] cleanWs 00:00:04.368 [WS-CLEANUP] Deleting project workspace... 00:00:04.368 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.373 [WS-CLEANUP] done 00:00:04.383 [Pipeline] setCustomBuildProperty 00:00:04.397 [Pipeline] sh 00:00:04.677 + sudo git config --global --replace-all safe.directory '*' 00:00:04.767 [Pipeline] httpRequest 00:00:04.786 [Pipeline] echo 00:00:04.787 Sorcerer 10.211.164.101 is alive 00:00:04.796 [Pipeline] httpRequest 00:00:04.801 HttpMethod: GET 00:00:04.801 URL: http://10.211.164.101/packages/jbp_b36476c4eef2004836014399fbf414610d5aa128.tar.gz 00:00:04.802 Sending request to url: http://10.211.164.101/packages/jbp_b36476c4eef2004836014399fbf414610d5aa128.tar.gz 00:00:04.804 Response Code: HTTP/1.1 200 OK 00:00:04.805 Success: Status code 200 is in the accepted range: 200,404 00:00:04.805 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_b36476c4eef2004836014399fbf414610d5aa128.tar.gz 00:00:04.951 [Pipeline] sh 00:00:05.231 + tar --no-same-owner -xf jbp_b36476c4eef2004836014399fbf414610d5aa128.tar.gz 00:00:05.245 [Pipeline] httpRequest 00:00:05.263 [Pipeline] echo 00:00:05.265 Sorcerer 10.211.164.101 is alive 00:00:05.272 [Pipeline] httpRequest 00:00:05.275 HttpMethod: GET 00:00:05.276 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:05.276 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:05.279 Response Code: HTTP/1.1 200 OK 00:00:05.279 Success: Status code 200 is in the accepted range: 200,404 00:00:05.280 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:24.988 [Pipeline] sh 00:00:25.276 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:27.874 [Pipeline] sh 00:00:28.157 + git -C spdk log --oneline -n5 00:00:28.157 2728651ee accel: adjust task per ch define name 00:00:28.157 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:28.157 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:28.157 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:28.157 719d03c6a sock/uring: only register net impl if supported 00:00:28.167 [Pipeline] } 00:00:28.196 [Pipeline] // stage 00:00:28.202 [Pipeline] stage 00:00:28.204 [Pipeline] { (Prepare) 00:00:28.215 [Pipeline] writeFile 00:00:28.227 [Pipeline] sh 00:00:28.505 + logger -p user.info -t JENKINS-CI 00:00:28.516 [Pipeline] sh 00:00:28.800 + logger -p user.info -t JENKINS-CI 00:00:28.812 [Pipeline] sh 00:00:29.094 + cat autorun-spdk.conf 00:00:29.094 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.094 SPDK_TEST_FUZZER_SHORT=1 00:00:29.094 SPDK_TEST_FUZZER=1 00:00:29.095 SPDK_RUN_UBSAN=1 00:00:29.101 RUN_NIGHTLY=0 00:00:29.109 [Pipeline] readFile 00:00:29.142 [Pipeline] withEnv 00:00:29.145 [Pipeline] { 00:00:29.161 [Pipeline] sh 00:00:29.444 + set -ex 00:00:29.444 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:29.444 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:29.444 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.444 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:29.444 ++ SPDK_TEST_FUZZER=1 00:00:29.444 ++ SPDK_RUN_UBSAN=1 00:00:29.444 ++ RUN_NIGHTLY=0 00:00:29.444 + case $SPDK_TEST_NVMF_NICS in 00:00:29.444 + DRIVERS= 00:00:29.444 + [[ -n '' ]] 00:00:29.444 + exit 0 00:00:29.455 [Pipeline] } 00:00:29.474 [Pipeline] // withEnv 00:00:29.480 [Pipeline] } 00:00:29.496 [Pipeline] // stage 00:00:29.504 [Pipeline] catchError 00:00:29.506 [Pipeline] { 00:00:29.517 [Pipeline] timeout 00:00:29.517 Timeout set to expire in 30 min 
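[Editorial sketch] The Prepare stage traced above reduces the job matrix to a flat KEY=VALUE file (autorun-spdk.conf) that later stages re-source, then gates optional NIC driver loading on SPDK_TEST_NVMF_NICS. Below is a minimal sketch of that pattern as the xtrace suggests it; the heredoc, paths, and the driver mapping are illustrative assumptions, not the exact CI scripts.

```bash
#!/usr/bin/env bash
# Sketch of the Prepare-stage gate seen in the xtrace above.
# Paths and the driver mapping are assumptions for illustration.
set -ex

conf=/var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf

# The job writes its test matrix as a flat KEY=VALUE file ...
cat > "$conf" <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_FUZZER_SHORT=1
SPDK_TEST_FUZZER=1
SPDK_RUN_UBSAN=1
RUN_NIGHTLY=0
EOF

# ... and every later stage re-sources it so each flag becomes an env var.
[[ -f $conf ]] && source "$conf"

# NIC drivers are only loaded when SPDK_TEST_NVMF_NICS selects them; this
# fuzzer job leaves it unset, so DRIVERS stays empty and the stage exits 0
# without touching the kernel, exactly as the trace shows.
case "${SPDK_TEST_NVMF_NICS:-}" in
  mlx5_ib) DRIVERS=mlx5_ib ;;   # hypothetical mapping, for illustration only
  *)       DRIVERS= ;;
esac
[[ -n $DRIVERS ]] || exit 0
```

Keeping the matrix in a sourceable file is what lets both the Jenkins pipeline steps and spdk/autorun.sh (invoked later in this log with the same autorun-spdk.conf) see an identical environment.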
00:00:29.518 [Pipeline] {
00:00:29.531 [Pipeline] stage
00:00:29.534 [Pipeline] { (Tests)
00:00:29.551 [Pipeline] sh
00:00:29.834 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.835 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.835 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.835 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:29.835 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:29.835 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.835 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:29.835 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.835 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.835 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.835 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:29.835 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.835 + source /etc/os-release
00:00:29.835 ++ NAME='Fedora Linux'
00:00:29.835 ++ VERSION='38 (Cloud Edition)'
00:00:29.835 ++ ID=fedora
00:00:29.835 ++ VERSION_ID=38
00:00:29.835 ++ VERSION_CODENAME=
00:00:29.835 ++ PLATFORM_ID=platform:f38
00:00:29.835 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.835 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.835 ++ LOGO=fedora-logo-icon
00:00:29.835 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.835 ++ HOME_URL=https://fedoraproject.org/
00:00:29.835 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.835 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.835 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.835 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.835 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.835 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.835 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.835 ++ SUPPORT_END=2024-05-14
00:00:29.835 ++ VARIANT='Cloud Edition'
00:00:29.835 ++ VARIANT_ID=cloud
00:00:29.835 + uname -a
00:00:29.835 Linux spdk-wfp-29 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.835 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:33.122 Hugepages
00:00:33.122 node  hugesize  free / total
00:00:33.122 node0 1048576kB 0 / 0
00:00:33.122 node0 2048kB    0 / 0
00:00:33.122 node1 1048576kB 0 / 0
00:00:33.122 node1 2048kB    0 / 0
00:00:33.122
00:00:33.122 Type  BDF           Vendor Device NUMA Driver  Device Block devices
00:00:33.122 I/OAT 0000:00:04.0  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.1  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.2  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.3  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.4  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.5  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.6  8086   2021   0    ioatdma -      -
00:00:33.122 I/OAT 0000:00:04.7  8086   2021   0    ioatdma -      -
00:00:33.122 NVMe  0000:5e:00.0  144d   a80a   0    nvme    nvme0  nvme0n1
00:00:33.122 I/OAT 0000:80:04.0  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.1  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.2  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.3  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.4  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.5  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.6  8086   2021   1    ioatdma -      -
00:00:33.122 I/OAT 0000:80:04.7  8086   2021   1    ioatdma -      -
00:00:33.122 NVMe  0000:af:00.0  8086   2701   1    nvme    nvme1  nvme1n1
00:00:33.380 NVMe  0000:b0:00.0  8086   2701   1    nvme    nvme2  nvme2n1
00:00:33.380 + rm -f /tmp/spdk-ld-path
00:00:33.380 + source autorun-spdk.conf
00:00:33.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.380 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:33.380 ++ SPDK_TEST_FUZZER=1
00:00:33.380 ++ SPDK_RUN_UBSAN=1
00:00:33.380 ++ RUN_NIGHTLY=0
00:00:33.380 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:33.380 + [[ -n '' ]]
00:00:33.380 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:33.380 + for M in /var/spdk/build-*-manifest.txt
00:00:33.380 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:33.380 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:33.380 + for M in /var/spdk/build-*-manifest.txt
00:00:33.380 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:33.380 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:33.380 ++ uname
00:00:33.380 + [[ Linux == \L\i\n\u\x ]]
00:00:33.380 + sudo dmesg -T
00:00:33.380 + sudo dmesg --clear
00:00:33.380 + dmesg_pid=2723604
00:00:33.380 + [[ Fedora Linux == FreeBSD ]]
00:00:33.380 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.380 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.380 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:33.380 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:33.380 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:33.380 + [[ -x /usr/src/fio-static/fio ]]
00:00:33.380 + export FIO_BIN=/usr/src/fio-static/fio
00:00:33.380 + FIO_BIN=/usr/src/fio-static/fio
00:00:33.380 + sudo dmesg -Tw
00:00:33.380 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:33.380 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:33.380 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:33.380 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.380 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.380 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:33.380 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.380 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.380 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:33.380 Test configuration: 00:00:33.380 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.380 SPDK_TEST_FUZZER_SHORT=1 00:00:33.380 SPDK_TEST_FUZZER=1 00:00:33.380 SPDK_RUN_UBSAN=1 00:00:33.639 RUN_NIGHTLY=0 13:46:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:33.639 13:46:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:33.639 13:46:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:33.639 13:46:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:33.639 13:46:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.639 13:46:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.639 13:46:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.639 13:46:11 -- paths/export.sh@5 -- $ export PATH 00:00:33.639 13:46:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.639 13:46:11 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:33.639 13:46:11 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:33.639 13:46:11 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721043971.XXXXXX 00:00:33.639 13:46:11 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721043971.MaI26q 00:00:33.639 13:46:11 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:33.639 13:46:11 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:33.639 13:46:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:33.639 13:46:11 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:33.639 13:46:11 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:33.639 13:46:11 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:33.639 13:46:11 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:33.639 13:46:11 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.639 13:46:11 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:33.639 13:46:11 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:33.639 13:46:11 -- pm/common@17 -- $ local monitor 00:00:33.639 13:46:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.639 13:46:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.639 13:46:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.639 13:46:11 -- pm/common@21 -- $ date +%s 00:00:33.639 13:46:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.639 13:46:11 -- pm/common@21 -- $ date +%s 00:00:33.639 13:46:11 -- pm/common@25 -- $ sleep 1 00:00:33.639 13:46:11 -- pm/common@21 -- $ date +%s 00:00:33.639 13:46:11 -- pm/common@21 -- $ date +%s 00:00:33.639 13:46:11 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043971 00:00:33.639 13:46:11 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043971 00:00:33.639 13:46:11 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043971 00:00:33.639 13:46:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043971 00:00:33.639 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043971_collect-vmstat.pm.log 00:00:33.639 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043971_collect-cpu-load.pm.log 00:00:33.639 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043971_collect-cpu-temp.pm.log 00:00:33.639 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043971_collect-bmc-pm.bmc.pm.log 00:00:34.573 13:46:12 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:34.573 13:46:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:34.573 13:46:12 -- spdk/autobuild.sh@12 -- $ 
umask 022 00:00:34.573 13:46:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:34.573 13:46:12 -- spdk/autobuild.sh@16 -- $ date -u 00:00:34.573 Mon Jul 15 11:46:12 AM UTC 2024 00:00:34.573 13:46:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:34.573 v24.09-pre-206-g2728651ee 00:00:34.573 13:46:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:34.573 13:46:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:34.573 13:46:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:34.573 13:46:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:34.573 13:46:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:34.573 13:46:12 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.573 ************************************ 00:00:34.573 START TEST ubsan 00:00:34.573 ************************************ 00:00:34.573 13:46:12 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:34.573 using ubsan 00:00:34.573 00:00:34.573 real 0m0.001s 00:00:34.573 user 0m0.000s 00:00:34.573 sys 0m0.000s 00:00:34.573 13:46:12 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:34.573 13:46:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:34.573 ************************************ 00:00:34.573 END TEST ubsan 00:00:34.573 ************************************ 00:00:34.831 13:46:12 -- common/autotest_common.sh@1142 -- $ return 0 00:00:34.832 13:46:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:34.832 13:46:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:34.832 13:46:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:34.832 13:46:12 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:34.832 13:46:12 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:34.832 13:46:12 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:34.832 13:46:12 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:00:34.832 13:46:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:34.832 13:46:12 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.832 ************************************ 00:00:34.832 START TEST autobuild_llvm_precompile 00:00:34.832 ************************************ 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:34.832 Target: x86_64-redhat-linux-gnu 00:00:34.832 Thread model: posix 00:00:34.832 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:34.832 13:46:12 autobuild_llvm_precompile -- 
common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:34.832 13:46:12 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:35.090 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:35.090 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:35.656 Using 'verbs' RDMA provider 00:00:51.477 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:06.364 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:06.364 Creating mk/config.mk...done. 00:01:06.364 Creating mk/cc.flags.mk...done. 00:01:06.364 Type 'make' to build. 00:01:06.364 00:01:06.364 real 0m30.404s 00:01:06.364 user 0m13.039s 00:01:06.364 sys 0m16.844s 00:01:06.364 13:46:43 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:06.364 13:46:43 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:06.364 ************************************ 00:01:06.364 END TEST autobuild_llvm_precompile 00:01:06.364 ************************************ 00:01:06.364 13:46:43 -- common/autotest_common.sh@1142 -- $ return 0 00:01:06.364 13:46:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:06.364 13:46:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:06.364 13:46:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:06.364 13:46:43 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:06.364 13:46:43 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:06.364 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:06.364 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:06.364 Using 'verbs' RDMA provider 00:01:19.147 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:31.445 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:31.704 Creating mk/config.mk...done. 00:01:31.704 Creating mk/cc.flags.mk...done. 00:01:31.704 Type 'make' to build. 
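[Editorial sketch] From here the log switches from configuration to execution: each build and test step runs inside the run_test helper that printed the START TEST / END TEST banners for the ubsan check above, and it next drives the `make -j72` step below. The following is a rough reconstruction of that wrapper from the banners and the `'[' 3 -le 1 ']'` guard visible in the trace; SPDK's real helper in autotest_common.sh also manages xtrace and timing bookkeeping, so treat the details as assumptions.

```bash
# Rough reconstruction of the run_test wrapper seen in this log;
# not SPDK's exact implementation.
run_test() {
  local name=$1; shift
  (( $# >= 1 )) || return 1   # cf. the "'[' 3 -le 1 ']'" argument guard in the trace
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                   # prints the real/user/sys lines seen above
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Invoked exactly as the next log entry shows:
#   run_test make make -j72
```

The banner-plus-`time` shape is what makes the per-test timing lines in this log easy to grep for when a stage hangs or regresses.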
00:01:31.704 13:47:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:31.704 13:47:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:31.704 13:47:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.704 13:47:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.704 ************************************ 00:01:31.704 START TEST make 00:01:31.704 ************************************ 00:01:31.704 13:47:09 make -- common/autotest_common.sh@1123 -- $ make -j72 00:01:31.964 make[1]: Nothing to be done for 'all'. 00:01:33.868 The Meson build system 00:01:33.868 Version: 1.3.1 00:01:33.868 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:33.868 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:33.868 Build type: native build 00:01:33.868 Project name: libvfio-user 00:01:33.868 Project version: 0.0.1 00:01:33.868 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:33.868 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:33.868 Host machine cpu family: x86_64 00:01:33.868 Host machine cpu: x86_64 00:01:33.868 Run-time dependency threads found: YES 00:01:33.868 Library dl found: YES 00:01:33.868 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.868 Run-time dependency json-c found: YES 0.17 00:01:33.868 Run-time dependency cmocka found: YES 1.1.7 00:01:33.868 Program pytest-3 found: NO 00:01:33.868 Program flake8 found: NO 00:01:33.868 Program misspell-fixer found: NO 00:01:33.868 Program restructuredtext-lint found: NO 00:01:33.868 Program valgrind found: YES (/usr/bin/valgrind) 00:01:33.868 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.868 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.868 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.868 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:33.868 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:33.868 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:33.868 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:33.868 Build targets in project: 8 00:01:33.868 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:33.868 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:33.868 00:01:33.868 libvfio-user 0.0.1 00:01:33.868 00:01:33.868 User defined options 00:01:33.868 buildtype : debug 00:01:33.868 default_library: static 00:01:33.868 libdir : /usr/local/lib 00:01:33.868 00:01:33.868 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.125 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:34.125 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:34.125 [2/36] Compiling C object samples/null.p/null.c.o 00:01:34.125 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:34.125 [4/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:34.125 [5/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:34.125 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:34.125 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:34.125 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:34.125 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:34.125 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:34.125 [11/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:34.125 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:34.125 [13/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:34.125 [14/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:34.125 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:34.125 [16/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:34.125 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:34.125 [18/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:34.125 [19/36] Compiling C object samples/server.p/server.c.o 00:01:34.125 [20/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:34.125 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:34.125 [22/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:34.125 [23/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:34.383 [24/36] Compiling C object samples/client.p/client.c.o 00:01:34.383 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:34.383 [26/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:34.383 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:34.383 [28/36] Linking target samples/client 00:01:34.383 [29/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:34.383 [30/36] Linking target test/unit_tests 00:01:34.383 [31/36] Linking static target lib/libvfio-user.a 00:01:34.383 [32/36] Linking target samples/gpio-pci-idio-16 00:01:34.383 [33/36] Linking target samples/lspci 00:01:34.383 [34/36] Linking target samples/server 00:01:34.383 [35/36] Linking target samples/null 00:01:34.383 [36/36] Linking target samples/shadow_ioeventfd_server 00:01:34.383 INFO: autodetecting backend as ninja 00:01:34.383 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:34.383 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:34.948 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:34.948 ninja: no work to do. 00:01:41.515 The Meson build system 00:01:41.515 Version: 1.3.1 00:01:41.515 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:41.515 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:41.515 Build type: native build 00:01:41.515 Program cat found: YES (/usr/bin/cat) 00:01:41.515 Project name: DPDK 00:01:41.515 Project version: 24.03.0 00:01:41.515 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:41.515 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:41.515 Host machine cpu family: x86_64 00:01:41.515 Host machine cpu: x86_64 00:01:41.515 Message: ## Building in Developer Mode ## 00:01:41.515 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:41.515 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:41.515 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:41.515 Program python3 found: YES (/usr/bin/python3) 00:01:41.515 Program cat found: YES (/usr/bin/cat) 00:01:41.515 Compiler for C supports arguments -march=native: YES 00:01:41.515 Checking for size of "void *" : 8 00:01:41.515 Checking for size of "void *" : 8 (cached) 00:01:41.515 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:41.515 Library m found: YES 00:01:41.515 Library numa found: YES 00:01:41.515 Has header "numaif.h" : YES 00:01:41.515 Library fdt found: NO 00:01:41.515 Library execinfo found: NO 00:01:41.515 Has header "execinfo.h" : YES 00:01:41.515 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:41.515 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:41.515 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:41.515 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:41.515 Run-time dependency openssl found: YES 3.0.9 00:01:41.515 Run-time dependency libpcap found: YES 1.10.4 00:01:41.515 Has header "pcap.h" with dependency libpcap: YES 00:01:41.515 Compiler for C supports arguments -Wcast-qual: YES 00:01:41.515 Compiler for C supports arguments -Wdeprecated: YES 00:01:41.515 Compiler for C supports arguments -Wformat: YES 00:01:41.515 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:41.515 Compiler for C supports arguments -Wformat-security: YES 00:01:41.515 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.515 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:41.515 Compiler for C supports arguments -Wnested-externs: YES 00:01:41.515 Compiler for C supports arguments -Wold-style-definition: YES 00:01:41.515 Compiler for C supports arguments -Wpointer-arith: YES 00:01:41.515 Compiler for C supports arguments -Wsign-compare: YES 00:01:41.515 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:41.515 Compiler for C supports arguments -Wundef: YES 00:01:41.515 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.515 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:41.515 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:41.515 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:41.515 Program objdump found: YES (/usr/bin/objdump) 00:01:41.515 Compiler for C supports arguments -mavx512f: YES 00:01:41.515 Checking if "AVX512 checking" compiles: YES 00:01:41.515 Fetching value of define "__SSE4_2__" : 1 00:01:41.515 Fetching value of define "__AES__" : 1 00:01:41.515 Fetching value of define "__AVX__" : 1 00:01:41.515 Fetching value of define "__AVX2__" : 1 00:01:41.515 Fetching value of define "__AVX512BW__" : 1 00:01:41.515 Fetching value of define "__AVX512CD__" : 1 00:01:41.515 Fetching value of define "__AVX512DQ__" : 1 00:01:41.515 Fetching value of define "__AVX512F__" : 1 00:01:41.515 Fetching value of define "__AVX512VL__" : 1 00:01:41.515 Fetching value of define "__PCLMUL__" : 1 00:01:41.515 Fetching value of define "__RDRND__" : 1 00:01:41.515 Fetching value of define "__RDSEED__" : 1 00:01:41.515 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:41.515 Fetching value of define "__znver1__" : (undefined) 00:01:41.515 Fetching value of define "__znver2__" : (undefined) 00:01:41.515 Fetching value of define "__znver3__" : (undefined) 00:01:41.515 Fetching value of define "__znver4__" : (undefined) 00:01:41.515 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:41.515 Message: lib/log: Defining dependency "log" 00:01:41.515 Message: lib/kvargs: Defining dependency "kvargs" 00:01:41.515 Message: lib/telemetry: Defining dependency "telemetry" 00:01:41.515 Checking for function "getentropy" : NO 00:01:41.515 Message: lib/eal: Defining dependency "eal" 00:01:41.515 Message: lib/ring: Defining dependency "ring" 00:01:41.515 Message: lib/rcu: Defining dependency "rcu" 00:01:41.515 Message: lib/mempool: Defining dependency "mempool" 00:01:41.515 Message: lib/mbuf: Defining dependency "mbuf" 00:01:41.515 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:41.515 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:41.515 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:41.515 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:41.515 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:41.515 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:41.515 Compiler for C supports arguments -mpclmul: YES 00:01:41.515 Compiler for C supports arguments -maes: YES 00:01:41.515 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:41.515 Compiler for C supports arguments -mavx512bw: YES 00:01:41.515 Compiler for C supports arguments -mavx512dq: YES 00:01:41.515 Compiler for C supports arguments -mavx512vl: YES 00:01:41.515 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:41.515 Compiler for C supports arguments -mavx2: YES 00:01:41.515 Compiler for C supports arguments -mavx: YES 00:01:41.515 Message: lib/net: Defining dependency "net" 00:01:41.515 Message: lib/meter: Defining dependency "meter" 00:01:41.515 Message: lib/ethdev: Defining dependency "ethdev" 00:01:41.515 Message: lib/pci: Defining dependency "pci" 00:01:41.516 Message: lib/cmdline: Defining dependency "cmdline" 00:01:41.516 Message: lib/hash: Defining dependency "hash" 00:01:41.516 Message: lib/timer: Defining dependency "timer" 00:01:41.516 Message: lib/compressdev: Defining dependency "compressdev" 00:01:41.516 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:41.516 Message: lib/dmadev: Defining dependency "dmadev" 00:01:41.516 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:41.516 Message: lib/power: Defining dependency "power" 00:01:41.516 Message: lib/reorder: Defining 
dependency "reorder" 00:01:41.516 Message: lib/security: Defining dependency "security" 00:01:41.516 Has header "linux/userfaultfd.h" : YES 00:01:41.516 Has header "linux/vduse.h" : YES 00:01:41.516 Message: lib/vhost: Defining dependency "vhost" 00:01:41.516 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:41.516 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:41.516 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:41.516 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:41.516 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:41.516 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:41.516 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:41.516 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:41.516 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:41.516 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:41.516 Program doxygen found: YES (/usr/bin/doxygen) 00:01:41.516 Configuring doxy-api-html.conf using configuration 00:01:41.516 Configuring doxy-api-man.conf using configuration 00:01:41.516 Program mandb found: YES (/usr/bin/mandb) 00:01:41.516 Program sphinx-build found: NO 00:01:41.516 Configuring rte_build_config.h using configuration 00:01:41.516 Message: 00:01:41.516 ================= 00:01:41.516 Applications Enabled 00:01:41.516 ================= 00:01:41.516 00:01:41.516 apps: 00:01:41.516 00:01:41.516 00:01:41.516 Message: 00:01:41.516 ================= 00:01:41.516 Libraries Enabled 00:01:41.516 ================= 00:01:41.516 00:01:41.516 libs: 00:01:41.516 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:41.516 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:41.516 cryptodev, dmadev, power, reorder, security, vhost, 00:01:41.516 00:01:41.516 Message: 00:01:41.516 =============== 00:01:41.516 Drivers Enabled 00:01:41.516 =============== 00:01:41.516 00:01:41.516 common: 00:01:41.516 00:01:41.516 bus: 00:01:41.516 pci, vdev, 00:01:41.516 mempool: 00:01:41.516 ring, 00:01:41.516 dma: 00:01:41.516 00:01:41.516 net: 00:01:41.516 00:01:41.516 crypto: 00:01:41.516 00:01:41.516 compress: 00:01:41.516 00:01:41.516 vdpa: 00:01:41.516 00:01:41.516 00:01:41.516 Message: 00:01:41.516 ================= 00:01:41.516 Content Skipped 00:01:41.516 ================= 00:01:41.516 00:01:41.516 apps: 00:01:41.516 dumpcap: explicitly disabled via build config 00:01:41.516 graph: explicitly disabled via build config 00:01:41.516 pdump: explicitly disabled via build config 00:01:41.516 proc-info: explicitly disabled via build config 00:01:41.516 test-acl: explicitly disabled via build config 00:01:41.516 test-bbdev: explicitly disabled via build config 00:01:41.516 test-cmdline: explicitly disabled via build config 00:01:41.516 test-compress-perf: explicitly disabled via build config 00:01:41.516 test-crypto-perf: explicitly disabled via build config 00:01:41.516 test-dma-perf: explicitly disabled via build config 00:01:41.516 test-eventdev: explicitly disabled via build config 00:01:41.516 test-fib: explicitly disabled via build config 00:01:41.516 test-flow-perf: explicitly disabled via build config 00:01:41.516 test-gpudev: explicitly disabled via build config 00:01:41.516 test-mldev: explicitly disabled via build config 00:01:41.516 test-pipeline: explicitly disabled via build config 00:01:41.516 test-pmd: explicitly 
disabled via build config 00:01:41.516 test-regex: explicitly disabled via build config 00:01:41.516 test-sad: explicitly disabled via build config 00:01:41.516 test-security-perf: explicitly disabled via build config 00:01:41.516 00:01:41.516 libs: 00:01:41.516 argparse: explicitly disabled via build config 00:01:41.516 metrics: explicitly disabled via build config 00:01:41.516 acl: explicitly disabled via build config 00:01:41.516 bbdev: explicitly disabled via build config 00:01:41.516 bitratestats: explicitly disabled via build config 00:01:41.516 bpf: explicitly disabled via build config 00:01:41.516 cfgfile: explicitly disabled via build config 00:01:41.516 distributor: explicitly disabled via build config 00:01:41.516 efd: explicitly disabled via build config 00:01:41.516 eventdev: explicitly disabled via build config 00:01:41.516 dispatcher: explicitly disabled via build config 00:01:41.516 gpudev: explicitly disabled via build config 00:01:41.516 gro: explicitly disabled via build config 00:01:41.516 gso: explicitly disabled via build config 00:01:41.516 ip_frag: explicitly disabled via build config 00:01:41.516 jobstats: explicitly disabled via build config 00:01:41.516 latencystats: explicitly disabled via build config 00:01:41.516 lpm: explicitly disabled via build config 00:01:41.516 member: explicitly disabled via build config 00:01:41.516 pcapng: explicitly disabled via build config 00:01:41.516 rawdev: explicitly disabled via build config 00:01:41.516 regexdev: explicitly disabled via build config 00:01:41.516 mldev: explicitly disabled via build config 00:01:41.516 rib: explicitly disabled via build config 00:01:41.516 sched: explicitly disabled via build config 00:01:41.516 stack: explicitly disabled via build config 00:01:41.516 ipsec: explicitly disabled via build config 00:01:41.516 pdcp: explicitly disabled via build config 00:01:41.516 fib: explicitly disabled via build config 00:01:41.516 port: explicitly disabled via build config 00:01:41.516 pdump: explicitly disabled via build config 00:01:41.516 table: explicitly disabled via build config 00:01:41.516 pipeline: explicitly disabled via build config 00:01:41.516 graph: explicitly disabled via build config 00:01:41.516 node: explicitly disabled via build config 00:01:41.516 00:01:41.516 drivers: 00:01:41.516 common/cpt: not in enabled drivers build config 00:01:41.516 common/dpaax: not in enabled drivers build config 00:01:41.516 common/iavf: not in enabled drivers build config 00:01:41.516 common/idpf: not in enabled drivers build config 00:01:41.516 common/ionic: not in enabled drivers build config 00:01:41.516 common/mvep: not in enabled drivers build config 00:01:41.516 common/octeontx: not in enabled drivers build config 00:01:41.516 bus/auxiliary: not in enabled drivers build config 00:01:41.516 bus/cdx: not in enabled drivers build config 00:01:41.516 bus/dpaa: not in enabled drivers build config 00:01:41.516 bus/fslmc: not in enabled drivers build config 00:01:41.516 bus/ifpga: not in enabled drivers build config 00:01:41.516 bus/platform: not in enabled drivers build config 00:01:41.516 bus/uacce: not in enabled drivers build config 00:01:41.516 bus/vmbus: not in enabled drivers build config 00:01:41.516 common/cnxk: not in enabled drivers build config 00:01:41.516 common/mlx5: not in enabled drivers build config 00:01:41.516 common/nfp: not in enabled drivers build config 00:01:41.516 common/nitrox: not in enabled drivers build config 00:01:41.516 common/qat: not in enabled drivers build config 
00:01:41.516 common/sfc_efx: not in enabled drivers build config 00:01:41.516 mempool/bucket: not in enabled drivers build config 00:01:41.516 mempool/cnxk: not in enabled drivers build config 00:01:41.516 mempool/dpaa: not in enabled drivers build config 00:01:41.516 mempool/dpaa2: not in enabled drivers build config 00:01:41.516 mempool/octeontx: not in enabled drivers build config 00:01:41.516 mempool/stack: not in enabled drivers build config 00:01:41.516 dma/cnxk: not in enabled drivers build config 00:01:41.516 dma/dpaa: not in enabled drivers build config 00:01:41.516 dma/dpaa2: not in enabled drivers build config 00:01:41.516 dma/hisilicon: not in enabled drivers build config 00:01:41.516 dma/idxd: not in enabled drivers build config 00:01:41.516 dma/ioat: not in enabled drivers build config 00:01:41.516 dma/skeleton: not in enabled drivers build config 00:01:41.516 net/af_packet: not in enabled drivers build config 00:01:41.516 net/af_xdp: not in enabled drivers build config 00:01:41.516 net/ark: not in enabled drivers build config 00:01:41.516 net/atlantic: not in enabled drivers build config 00:01:41.516 net/avp: not in enabled drivers build config 00:01:41.516 net/axgbe: not in enabled drivers build config 00:01:41.516 net/bnx2x: not in enabled drivers build config 00:01:41.516 net/bnxt: not in enabled drivers build config 00:01:41.516 net/bonding: not in enabled drivers build config 00:01:41.516 net/cnxk: not in enabled drivers build config 00:01:41.516 net/cpfl: not in enabled drivers build config 00:01:41.516 net/cxgbe: not in enabled drivers build config 00:01:41.516 net/dpaa: not in enabled drivers build config 00:01:41.516 net/dpaa2: not in enabled drivers build config 00:01:41.516 net/e1000: not in enabled drivers build config 00:01:41.516 net/ena: not in enabled drivers build config 00:01:41.516 net/enetc: not in enabled drivers build config 00:01:41.516 net/enetfec: not in enabled drivers build config 00:01:41.516 net/enic: not in enabled drivers build config 00:01:41.516 net/failsafe: not in enabled drivers build config 00:01:41.516 net/fm10k: not in enabled drivers build config 00:01:41.516 net/gve: not in enabled drivers build config 00:01:41.516 net/hinic: not in enabled drivers build config 00:01:41.516 net/hns3: not in enabled drivers build config 00:01:41.516 net/i40e: not in enabled drivers build config 00:01:41.516 net/iavf: not in enabled drivers build config 00:01:41.516 net/ice: not in enabled drivers build config 00:01:41.516 net/idpf: not in enabled drivers build config 00:01:41.516 net/igc: not in enabled drivers build config 00:01:41.516 net/ionic: not in enabled drivers build config 00:01:41.516 net/ipn3ke: not in enabled drivers build config 00:01:41.516 net/ixgbe: not in enabled drivers build config 00:01:41.516 net/mana: not in enabled drivers build config 00:01:41.516 net/memif: not in enabled drivers build config 00:01:41.516 net/mlx4: not in enabled drivers build config 00:01:41.516 net/mlx5: not in enabled drivers build config 00:01:41.516 net/mvneta: not in enabled drivers build config 00:01:41.516 net/mvpp2: not in enabled drivers build config 00:01:41.516 net/netvsc: not in enabled drivers build config 00:01:41.516 net/nfb: not in enabled drivers build config 00:01:41.516 net/nfp: not in enabled drivers build config 00:01:41.516 net/ngbe: not in enabled drivers build config 00:01:41.517 net/null: not in enabled drivers build config 00:01:41.517 net/octeontx: not in enabled drivers build config 00:01:41.517 net/octeon_ep: not in enabled 
drivers build config 00:01:41.517 net/pcap: not in enabled drivers build config 00:01:41.517 net/pfe: not in enabled drivers build config 00:01:41.517 net/qede: not in enabled drivers build config 00:01:41.517 net/ring: not in enabled drivers build config 00:01:41.517 net/sfc: not in enabled drivers build config 00:01:41.517 net/softnic: not in enabled drivers build config 00:01:41.517 net/tap: not in enabled drivers build config 00:01:41.517 net/thunderx: not in enabled drivers build config 00:01:41.517 net/txgbe: not in enabled drivers build config 00:01:41.517 net/vdev_netvsc: not in enabled drivers build config 00:01:41.517 net/vhost: not in enabled drivers build config 00:01:41.517 net/virtio: not in enabled drivers build config 00:01:41.517 net/vmxnet3: not in enabled drivers build config 00:01:41.517 raw/*: missing internal dependency, "rawdev" 00:01:41.517 crypto/armv8: not in enabled drivers build config 00:01:41.517 crypto/bcmfs: not in enabled drivers build config 00:01:41.517 crypto/caam_jr: not in enabled drivers build config 00:01:41.517 crypto/ccp: not in enabled drivers build config 00:01:41.517 crypto/cnxk: not in enabled drivers build config 00:01:41.517 crypto/dpaa_sec: not in enabled drivers build config 00:01:41.517 crypto/dpaa2_sec: not in enabled drivers build config 00:01:41.517 crypto/ipsec_mb: not in enabled drivers build config 00:01:41.517 crypto/mlx5: not in enabled drivers build config 00:01:41.517 crypto/mvsam: not in enabled drivers build config 00:01:41.517 crypto/nitrox: not in enabled drivers build config 00:01:41.517 crypto/null: not in enabled drivers build config 00:01:41.517 crypto/octeontx: not in enabled drivers build config 00:01:41.517 crypto/openssl: not in enabled drivers build config 00:01:41.517 crypto/scheduler: not in enabled drivers build config 00:01:41.517 crypto/uadk: not in enabled drivers build config 00:01:41.517 crypto/virtio: not in enabled drivers build config 00:01:41.517 compress/isal: not in enabled drivers build config 00:01:41.517 compress/mlx5: not in enabled drivers build config 00:01:41.517 compress/nitrox: not in enabled drivers build config 00:01:41.517 compress/octeontx: not in enabled drivers build config 00:01:41.517 compress/zlib: not in enabled drivers build config 00:01:41.517 regex/*: missing internal dependency, "regexdev" 00:01:41.517 ml/*: missing internal dependency, "mldev" 00:01:41.517 vdpa/ifc: not in enabled drivers build config 00:01:41.517 vdpa/mlx5: not in enabled drivers build config 00:01:41.517 vdpa/nfp: not in enabled drivers build config 00:01:41.517 vdpa/sfc: not in enabled drivers build config 00:01:41.517 event/*: missing internal dependency, "eventdev" 00:01:41.517 baseband/*: missing internal dependency, "bbdev" 00:01:41.517 gpu/*: missing internal dependency, "gpudev" 00:01:41.517 00:01:41.517 00:01:41.517 Build targets in project: 85 00:01:41.517 00:01:41.517 DPDK 24.03.0 00:01:41.517 00:01:41.517 User defined options 00:01:41.517 buildtype : debug 00:01:41.517 default_library : static 00:01:41.517 libdir : lib 00:01:41.517 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:41.517 c_args : -fPIC -Werror 00:01:41.517 c_link_args : 00:01:41.517 cpu_instruction_set: native 00:01:41.517 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:41.517 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:41.517 enable_docs : false 00:01:41.517 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:41.517 enable_kmods : false 00:01:41.517 max_lcores : 128 00:01:41.517 tests : false 00:01:41.517 00:01:41.517 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.517 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:41.517 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:41.517 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:41.517 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:41.517 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:41.517 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:41.517 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:41.517 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:41.517 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:41.517 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:41.517 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:41.517 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:41.517 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:41.517 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:41.517 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:41.517 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:41.517 [16/268] Linking static target lib/librte_kvargs.a 00:01:41.517 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:41.517 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:41.517 [19/268] Linking static target lib/librte_log.a 00:01:41.517 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.776 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:41.776 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:41.776 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:41.776 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:41.776 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:41.776 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:41.776 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:41.776 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:41.776 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:41.776 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:41.776 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.776 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:41.776 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:41.776 
[34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.776 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:41.776 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:41.776 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:41.776 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.776 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:41.776 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:41.776 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:41.776 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:41.776 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:41.776 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:41.776 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:41.776 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:41.776 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:41.776 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:41.776 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:41.776 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:41.776 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.776 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:41.776 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:41.776 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:41.776 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:41.776 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:41.776 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.776 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.776 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:41.776 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.776 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.776 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.776 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:41.776 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:41.776 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:41.776 [66/268] Linking static target lib/librte_telemetry.a 00:01:41.776 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:41.776 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:41.776 [69/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:41.776 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:41.776 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:41.776 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:41.776 [73/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:41.776 
[74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:41.776 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:41.777 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:41.777 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:41.777 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:41.777 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:41.777 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:41.777 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:41.777 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:41.777 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:41.777 [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:41.777 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:41.777 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:41.777 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:41.777 [88/268] Linking static target lib/librte_ring.a 00:01:41.777 [89/268] Linking static target lib/librte_pci.a 00:01:41.777 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:41.777 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:41.777 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:41.777 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:41.777 [94/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:41.777 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:41.777 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:41.777 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:41.777 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:41.777 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:41.777 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:41.777 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:41.777 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:41.777 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:41.777 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:41.777 [105/268] Linking static target lib/librte_eal.a 00:01:41.777 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.036 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:42.036 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.036 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:42.036 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:42.036 [111/268] Linking static target lib/librte_mempool.a 00:01:42.036 [112/268] Linking static target lib/librte_rcu.a 00:01:42.036 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:42.036 [114/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.036 [115/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:42.036 [116/268] Linking target lib/librte_log.so.24.1 00:01:42.036 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.036 [118/268] Linking static target lib/librte_mbuf.a 00:01:42.036 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.036 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:42.036 [121/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.294 [122/268] Linking static target lib/librte_net.a 00:01:42.294 [123/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.294 [124/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:42.294 [125/268] Linking static target lib/librte_meter.a 00:01:42.294 [126/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:42.294 [127/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.294 [128/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.294 [129/268] Linking target lib/librte_kvargs.so.24.1 00:01:42.294 [130/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:42.294 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:42.294 [132/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:42.294 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:42.294 [134/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.294 [135/268] Linking target lib/librte_telemetry.so.24.1 00:01:42.294 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:42.294 [137/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:42.294 [138/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.294 [139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:42.294 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:42.294 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:42.294 [142/268] Linking static target lib/librte_timer.a 00:01:42.294 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.294 [144/268] Linking static target lib/librte_cmdline.a 00:01:42.294 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.294 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.294 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:42.294 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.294 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:42.294 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:42.294 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:42.294 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:42.294 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.295 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:42.295 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:01:42.295 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.295 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.554 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.554 [159/268] Linking static target lib/librte_compressdev.a 00:01:42.554 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.554 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.554 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:42.554 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:42.554 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:42.554 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:42.554 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:42.554 [167/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.554 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.554 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.554 [170/268] Linking static target lib/librte_dmadev.a 00:01:42.554 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.554 [172/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.554 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:42.554 [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.554 [175/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.554 [176/268] Linking static target lib/librte_power.a 00:01:42.554 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.554 [178/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.554 [179/268] Linking static target lib/librte_reorder.a 00:01:42.554 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.554 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.554 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.554 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.554 [184/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.554 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.554 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.554 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.554 [188/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.554 [189/268] Linking static target lib/librte_security.a 00:01:42.554 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.554 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.554 [192/268] Linking static target lib/librte_hash.a 00:01:42.554 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.554 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.554 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.554 [196/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:42.554 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.554 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.815 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:42.815 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.815 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.815 [202/268] Linking static target lib/librte_cryptodev.a 00:01:42.815 [203/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.815 [204/268] Linking static target drivers/librte_bus_vdev.a 00:01:42.815 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:42.815 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:42.815 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.815 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:42.815 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.815 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.815 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.815 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.815 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:42.815 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:42.815 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.815 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.074 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.074 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:43.074 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.074 [220/268] Linking static target lib/librte_ethdev.a 00:01:43.074 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.332 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.332 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.591 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.591 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.591 [226/268] Linking static target lib/librte_vhost.a 00:01:43.591 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.849 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.849 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.227 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.163 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.285 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.223 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.223 [234/268] Linking target lib/librte_eal.so.24.1 00:01:55.223 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:55.481 [236/268] Linking target lib/librte_meter.so.24.1 00:01:55.481 [237/268] Linking target lib/librte_pci.so.24.1 00:01:55.481 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:55.481 [239/268] Linking target lib/librte_ring.so.24.1 00:01:55.481 [240/268] Linking target lib/librte_timer.so.24.1 00:01:55.481 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:55.481 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:55.481 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:55.481 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:55.481 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:55.481 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:55.739 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:55.739 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:55.739 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:55.739 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:55.739 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:55.739 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:55.739 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:55.998 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:55.998 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:55.998 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:55.998 [257/268] Linking target lib/librte_net.so.24.1 00:01:55.998 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:56.258 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:56.258 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:56.258 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:56.258 [262/268] Linking target lib/librte_hash.so.24.1 00:01:56.258 [263/268] Linking target lib/librte_security.so.24.1 00:01:56.258 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:56.517 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:56.517 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:56.517 [267/268] Linking target lib/librte_power.so.24.1 00:01:56.517 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:56.517 INFO: autodetecting backend as ninja 00:01:56.517 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:57.454 CC lib/ut/ut.o 00:01:57.454 CC lib/ut_mock/mock.o 00:01:57.454 CC lib/log/log.o 00:01:57.454 CC lib/log/log_flags.o 00:01:57.454 CC lib/log/log_deprecated.o 00:01:57.713 LIB libspdk_ut.a 00:01:57.713 LIB libspdk_log.a 00:01:57.713 LIB libspdk_ut_mock.a 00:01:57.970 CC lib/dma/dma.o 00:01:57.970 CC lib/util/base64.o 00:01:57.970 CC lib/util/bit_array.o 00:01:57.970 CC lib/util/cpuset.o 00:01:57.970 CC 
lib/util/crc16.o 00:01:57.970 CXX lib/trace_parser/trace.o 00:01:57.970 CC lib/util/crc32.o 00:01:57.970 CC lib/util/crc32c.o 00:01:57.970 CC lib/ioat/ioat.o 00:01:57.970 CC lib/util/crc32_ieee.o 00:01:57.970 CC lib/util/crc64.o 00:01:57.970 CC lib/util/dif.o 00:01:57.970 CC lib/util/fd.o 00:01:57.970 CC lib/util/file.o 00:01:57.970 CC lib/util/iov.o 00:01:57.970 CC lib/util/hexlify.o 00:01:57.970 CC lib/util/math.o 00:01:57.970 CC lib/util/pipe.o 00:01:57.970 CC lib/util/strerror_tls.o 00:01:57.970 CC lib/util/string.o 00:01:57.970 CC lib/util/uuid.o 00:01:57.970 CC lib/util/fd_group.o 00:01:57.970 CC lib/util/xor.o 00:01:57.970 CC lib/util/zipf.o 00:01:58.228 CC lib/vfio_user/host/vfio_user_pci.o 00:01:58.228 CC lib/vfio_user/host/vfio_user.o 00:01:58.228 LIB libspdk_dma.a 00:01:58.228 LIB libspdk_ioat.a 00:01:58.228 LIB libspdk_vfio_user.a 00:01:58.228 LIB libspdk_util.a 00:01:58.487 LIB libspdk_trace_parser.a 00:01:58.745 CC lib/rdma_utils/rdma_utils.o 00:01:58.745 CC lib/rdma_provider/common.o 00:01:58.745 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:58.745 CC lib/vmd/led.o 00:01:58.745 CC lib/vmd/vmd.o 00:01:58.745 CC lib/json/json_parse.o 00:01:58.745 CC lib/conf/conf.o 00:01:58.745 CC lib/json/json_util.o 00:01:58.745 CC lib/json/json_write.o 00:01:58.745 CC lib/idxd/idxd.o 00:01:58.745 CC lib/idxd/idxd_user.o 00:01:58.745 CC lib/idxd/idxd_kernel.o 00:01:58.745 CC lib/env_dpdk/env.o 00:01:58.745 CC lib/env_dpdk/memory.o 00:01:58.745 CC lib/env_dpdk/pci.o 00:01:58.745 CC lib/env_dpdk/init.o 00:01:58.745 CC lib/env_dpdk/threads.o 00:01:58.745 CC lib/env_dpdk/pci_ioat.o 00:01:58.745 CC lib/env_dpdk/pci_virtio.o 00:01:58.745 CC lib/env_dpdk/pci_vmd.o 00:01:58.745 CC lib/env_dpdk/pci_idxd.o 00:01:58.745 CC lib/env_dpdk/pci_event.o 00:01:58.745 CC lib/env_dpdk/sigbus_handler.o 00:01:58.745 CC lib/env_dpdk/pci_dpdk.o 00:01:58.745 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:58.745 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:58.745 LIB libspdk_rdma_provider.a 00:01:58.745 LIB libspdk_conf.a 00:01:58.745 LIB libspdk_rdma_utils.a 00:01:58.745 LIB libspdk_json.a 00:01:59.002 LIB libspdk_idxd.a 00:01:59.002 LIB libspdk_vmd.a 00:01:59.261 CC lib/jsonrpc/jsonrpc_server.o 00:01:59.261 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.261 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.261 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:59.261 LIB libspdk_jsonrpc.a 00:01:59.520 LIB libspdk_env_dpdk.a 00:01:59.779 CC lib/rpc/rpc.o 00:01:59.779 LIB libspdk_rpc.a 00:02:00.348 CC lib/notify/notify.o 00:02:00.348 CC lib/notify/notify_rpc.o 00:02:00.348 CC lib/trace/trace.o 00:02:00.348 CC lib/trace/trace_flags.o 00:02:00.348 CC lib/trace/trace_rpc.o 00:02:00.348 CC lib/keyring/keyring.o 00:02:00.348 CC lib/keyring/keyring_rpc.o 00:02:00.348 LIB libspdk_notify.a 00:02:00.348 LIB libspdk_keyring.a 00:02:00.348 LIB libspdk_trace.a 00:02:00.915 CC lib/sock/sock.o 00:02:00.915 CC lib/sock/sock_rpc.o 00:02:00.915 CC lib/thread/thread.o 00:02:00.915 CC lib/thread/iobuf.o 00:02:00.915 LIB libspdk_sock.a 00:02:01.482 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:01.482 CC lib/nvme/nvme_ctrlr.o 00:02:01.482 CC lib/nvme/nvme_fabric.o 00:02:01.482 CC lib/nvme/nvme_ns_cmd.o 00:02:01.482 CC lib/nvme/nvme_ns.o 00:02:01.482 CC lib/nvme/nvme_pcie_common.o 00:02:01.482 CC lib/nvme/nvme_pcie.o 00:02:01.482 CC lib/nvme/nvme_qpair.o 00:02:01.482 CC lib/nvme/nvme.o 00:02:01.482 CC lib/nvme/nvme_quirks.o 00:02:01.482 CC lib/nvme/nvme_transport.o 00:02:01.482 CC lib/nvme/nvme_discovery.o 00:02:01.482 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:01.482 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:02:01.482 CC lib/nvme/nvme_tcp.o 00:02:01.482 CC lib/nvme/nvme_opal.o 00:02:01.482 CC lib/nvme/nvme_io_msg.o 00:02:01.482 CC lib/nvme/nvme_poll_group.o 00:02:01.482 CC lib/nvme/nvme_zns.o 00:02:01.482 CC lib/nvme/nvme_stubs.o 00:02:01.482 CC lib/nvme/nvme_auth.o 00:02:01.482 CC lib/nvme/nvme_cuse.o 00:02:01.482 CC lib/nvme/nvme_vfio_user.o 00:02:01.482 CC lib/nvme/nvme_rdma.o 00:02:01.482 LIB libspdk_thread.a 00:02:01.740 CC lib/accel/accel.o 00:02:01.740 CC lib/accel/accel_rpc.o 00:02:01.740 CC lib/accel/accel_sw.o 00:02:01.740 CC lib/virtio/virtio.o 00:02:01.740 CC lib/virtio/virtio_pci.o 00:02:01.740 CC lib/virtio/virtio_vhost_user.o 00:02:01.740 CC lib/virtio/virtio_vfio_user.o 00:02:01.740 CC lib/blob/zeroes.o 00:02:01.740 CC lib/blob/blobstore.o 00:02:01.999 CC lib/blob/request.o 00:02:01.999 CC lib/blob/blob_bs_dev.o 00:02:01.999 CC lib/vfu_tgt/tgt_rpc.o 00:02:01.999 CC lib/vfu_tgt/tgt_endpoint.o 00:02:01.999 CC lib/init/json_config.o 00:02:01.999 CC lib/init/subsystem.o 00:02:01.999 CC lib/init/subsystem_rpc.o 00:02:01.999 CC lib/init/rpc.o 00:02:01.999 LIB libspdk_init.a 00:02:01.999 LIB libspdk_virtio.a 00:02:01.999 LIB libspdk_vfu_tgt.a 00:02:02.567 CC lib/event/app.o 00:02:02.567 CC lib/event/reactor.o 00:02:02.567 CC lib/event/log_rpc.o 00:02:02.567 CC lib/event/app_rpc.o 00:02:02.567 CC lib/event/scheduler_static.o 00:02:02.567 LIB libspdk_accel.a 00:02:02.567 LIB libspdk_event.a 00:02:02.826 LIB libspdk_nvme.a 00:02:02.826 CC lib/bdev/bdev.o 00:02:02.826 CC lib/bdev/bdev_rpc.o 00:02:02.826 CC lib/bdev/bdev_zone.o 00:02:02.826 CC lib/bdev/part.o 00:02:02.826 CC lib/bdev/scsi_nvme.o 00:02:03.763 LIB libspdk_blob.a 00:02:04.021 CC lib/blobfs/blobfs.o 00:02:04.021 CC lib/blobfs/tree.o 00:02:04.021 CC lib/lvol/lvol.o 00:02:04.588 LIB libspdk_lvol.a 00:02:04.588 LIB libspdk_blobfs.a 00:02:04.588 LIB libspdk_bdev.a 00:02:04.847 CC lib/ublk/ublk.o 00:02:04.847 CC lib/ublk/ublk_rpc.o 00:02:04.847 CC lib/nvmf/ctrlr_discovery.o 00:02:04.847 CC lib/nvmf/ctrlr.o 00:02:04.847 CC lib/nvmf/ctrlr_bdev.o 00:02:04.847 CC lib/nvmf/subsystem.o 00:02:04.847 CC lib/nvmf/nvmf.o 00:02:04.847 CC lib/ftl/ftl_core.o 00:02:04.847 CC lib/nvmf/nvmf_rpc.o 00:02:04.847 CC lib/nbd/nbd.o 00:02:04.847 CC lib/ftl/ftl_init.o 00:02:04.847 CC lib/scsi/dev.o 00:02:04.847 CC lib/ftl/ftl_layout.o 00:02:04.847 CC lib/nvmf/transport.o 00:02:04.847 CC lib/nbd/nbd_rpc.o 00:02:04.847 CC lib/scsi/lun.o 00:02:04.847 CC lib/ftl/ftl_debug.o 00:02:04.847 CC lib/ftl/ftl_io.o 00:02:04.847 CC lib/nvmf/tcp.o 00:02:04.847 CC lib/scsi/port.o 00:02:04.847 CC lib/ftl/ftl_sb.o 00:02:04.847 CC lib/scsi/scsi_bdev.o 00:02:04.847 CC lib/scsi/scsi.o 00:02:04.847 CC lib/nvmf/stubs.o 00:02:04.847 CC lib/ftl/ftl_l2p.o 00:02:04.847 CC lib/nvmf/mdns_server.o 00:02:04.848 CC lib/ftl/ftl_l2p_flat.o 00:02:04.848 CC lib/nvmf/vfio_user.o 00:02:04.848 CC lib/ftl/ftl_nv_cache.o 00:02:04.848 CC lib/scsi/scsi_rpc.o 00:02:04.848 CC lib/scsi/scsi_pr.o 00:02:04.848 CC lib/nvmf/rdma.o 00:02:04.848 CC lib/scsi/task.o 00:02:04.848 CC lib/ftl/ftl_band.o 00:02:04.848 CC lib/ftl/ftl_band_ops.o 00:02:04.848 CC lib/nvmf/auth.o 00:02:04.848 CC lib/ftl/ftl_writer.o 00:02:04.848 CC lib/ftl/ftl_rq.o 00:02:04.848 CC lib/ftl/ftl_reloc.o 00:02:04.848 CC lib/ftl/ftl_l2p_cache.o 00:02:04.848 CC lib/ftl/ftl_p2l.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_startup.o 
00:02:04.848 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.848 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.848 CC lib/ftl/utils/ftl_md.o 00:02:04.848 CC lib/ftl/utils/ftl_mempool.o 00:02:04.848 CC lib/ftl/utils/ftl_conf.o 00:02:05.106 CC lib/ftl/utils/ftl_bitmap.o 00:02:05.106 CC lib/ftl/utils/ftl_property.o 00:02:05.106 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:05.106 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:05.106 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:05.106 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:05.106 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:05.106 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:05.106 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:05.106 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:05.106 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:05.106 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:05.106 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:05.106 CC lib/ftl/base/ftl_base_dev.o 00:02:05.106 CC lib/ftl/base/ftl_base_bdev.o 00:02:05.106 CC lib/ftl/ftl_trace.o 00:02:05.373 LIB libspdk_nbd.a 00:02:05.373 LIB libspdk_scsi.a 00:02:05.632 LIB libspdk_ublk.a 00:02:05.632 LIB libspdk_ftl.a 00:02:05.890 CC lib/vhost/vhost.o 00:02:05.890 CC lib/iscsi/conn.o 00:02:05.890 CC lib/iscsi/init_grp.o 00:02:05.890 CC lib/vhost/vhost_rpc.o 00:02:05.890 CC lib/iscsi/iscsi.o 00:02:05.890 CC lib/vhost/vhost_scsi.o 00:02:05.890 CC lib/vhost/vhost_blk.o 00:02:05.890 CC lib/iscsi/md5.o 00:02:05.890 CC lib/iscsi/param.o 00:02:05.890 CC lib/vhost/rte_vhost_user.o 00:02:05.890 CC lib/iscsi/portal_grp.o 00:02:05.890 CC lib/iscsi/tgt_node.o 00:02:05.890 CC lib/iscsi/task.o 00:02:05.890 CC lib/iscsi/iscsi_subsystem.o 00:02:05.890 CC lib/iscsi/iscsi_rpc.o 00:02:06.457 LIB libspdk_nvmf.a 00:02:06.457 LIB libspdk_vhost.a 00:02:06.716 LIB libspdk_iscsi.a 00:02:06.975 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.234 CC module/vfu_device/vfu_virtio.o 00:02:07.234 CC module/vfu_device/vfu_virtio_blk.o 00:02:07.234 CC module/vfu_device/vfu_virtio_scsi.o 00:02:07.234 CC module/vfu_device/vfu_virtio_rpc.o 00:02:07.234 LIB libspdk_env_dpdk_rpc.a 00:02:07.234 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.234 CC module/accel/dsa/accel_dsa.o 00:02:07.234 CC module/accel/error/accel_error_rpc.o 00:02:07.234 CC module/accel/error/accel_error.o 00:02:07.234 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.234 CC module/accel/ioat/accel_ioat.o 00:02:07.234 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.234 CC module/sock/posix/posix.o 00:02:07.234 CC module/blob/bdev/blob_bdev.o 00:02:07.234 CC module/keyring/linux/keyring.o 00:02:07.234 CC module/keyring/file/keyring.o 00:02:07.234 CC module/accel/iaa/accel_iaa.o 00:02:07.234 CC module/keyring/linux/keyring_rpc.o 00:02:07.234 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.234 CC module/keyring/file/keyring_rpc.o 00:02:07.234 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.234 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.234 LIB libspdk_keyring_file.a 00:02:07.234 LIB libspdk_keyring_linux.a 00:02:07.492 LIB libspdk_accel_error.a 00:02:07.492 LIB libspdk_scheduler_gscheduler.a 00:02:07.492 LIB libspdk_scheduler_dynamic.a 00:02:07.492 LIB libspdk_accel_ioat.a 00:02:07.492 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.492 LIB libspdk_accel_iaa.a 00:02:07.492 LIB libspdk_accel_dsa.a 00:02:07.492 LIB 
libspdk_blob_bdev.a 00:02:07.492 LIB libspdk_vfu_device.a 00:02:07.751 LIB libspdk_sock_posix.a 00:02:07.751 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.751 CC module/bdev/gpt/gpt.o 00:02:07.751 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.751 CC module/bdev/error/vbdev_error.o 00:02:07.751 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.751 CC module/bdev/error/vbdev_error_rpc.o 00:02:07.751 CC module/bdev/delay/vbdev_delay.o 00:02:07.751 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:07.751 CC module/bdev/nvme/bdev_nvme.o 00:02:07.751 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:07.751 CC module/bdev/nvme/nvme_rpc.o 00:02:07.751 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:07.751 CC module/bdev/ftl/bdev_ftl.o 00:02:07.751 CC module/bdev/malloc/bdev_malloc.o 00:02:07.751 CC module/bdev/nvme/bdev_mdns_client.o 00:02:07.751 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.751 CC module/bdev/nvme/vbdev_opal.o 00:02:07.751 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.751 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.751 CC module/bdev/passthru/vbdev_passthru.o 00:02:07.751 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:07.751 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.751 CC module/bdev/raid/bdev_raid.o 00:02:07.751 CC module/bdev/raid/bdev_raid_sb.o 00:02:07.751 CC module/bdev/null/bdev_null_rpc.o 00:02:07.751 CC module/bdev/null/bdev_null.o 00:02:07.751 CC module/bdev/raid/raid0.o 00:02:07.751 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.009 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.009 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.009 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.009 CC module/bdev/raid/raid1.o 00:02:08.009 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.009 CC module/bdev/raid/concat.o 00:02:08.009 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.009 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.009 CC module/bdev/aio/bdev_aio.o 00:02:08.009 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.009 CC module/bdev/split/vbdev_split.o 00:02:08.009 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.009 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.009 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.009 LIB libspdk_bdev_gpt.a 00:02:08.009 LIB libspdk_bdev_error.a 00:02:08.009 LIB libspdk_bdev_null.a 00:02:08.009 LIB libspdk_bdev_aio.a 00:02:08.009 LIB libspdk_blobfs_bdev.a 00:02:08.267 LIB libspdk_bdev_ftl.a 00:02:08.267 LIB libspdk_bdev_delay.a 00:02:08.267 LIB libspdk_bdev_split.a 00:02:08.267 LIB libspdk_bdev_malloc.a 00:02:08.267 LIB libspdk_bdev_passthru.a 00:02:08.267 LIB libspdk_bdev_zone_block.a 00:02:08.267 LIB libspdk_bdev_iscsi.a 00:02:08.267 LIB libspdk_bdev_lvol.a 00:02:08.267 LIB libspdk_bdev_virtio.a 00:02:08.526 LIB libspdk_bdev_raid.a 00:02:09.115 LIB libspdk_bdev_nvme.a 00:02:09.730 CC module/event/subsystems/iobuf/iobuf.o 00:02:09.730 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:09.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:09.730 CC module/event/subsystems/scheduler/scheduler.o 00:02:09.730 CC module/event/subsystems/keyring/keyring.o 00:02:09.730 CC module/event/subsystems/vmd/vmd.o 00:02:09.730 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:09.730 CC module/event/subsystems/sock/sock.o 00:02:09.730 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:10.027 LIB libspdk_event_keyring.a 00:02:10.027 LIB libspdk_event_scheduler.a 00:02:10.027 LIB libspdk_event_vhost_blk.a 00:02:10.027 LIB libspdk_event_vmd.a 00:02:10.027 LIB libspdk_event_sock.a 00:02:10.027 LIB libspdk_event_iobuf.a 00:02:10.027 LIB libspdk_event_vfu_tgt.a 
00:02:10.285 CC module/event/subsystems/accel/accel.o 00:02:10.285 LIB libspdk_event_accel.a 00:02:10.852 CC module/event/subsystems/bdev/bdev.o 00:02:10.852 LIB libspdk_event_bdev.a 00:02:11.111 CC module/event/subsystems/scsi/scsi.o 00:02:11.111 CC module/event/subsystems/ublk/ublk.o 00:02:11.111 CC module/event/subsystems/nbd/nbd.o 00:02:11.111 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:11.111 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:11.370 LIB libspdk_event_nbd.a 00:02:11.370 LIB libspdk_event_ublk.a 00:02:11.370 LIB libspdk_event_scsi.a 00:02:11.370 LIB libspdk_event_nvmf.a 00:02:11.628 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:11.628 CC module/event/subsystems/iscsi/iscsi.o 00:02:11.887 LIB libspdk_event_vhost_scsi.a 00:02:11.887 LIB libspdk_event_iscsi.a 00:02:12.149 CC app/spdk_lspci/spdk_lspci.o 00:02:12.149 CXX app/trace/trace.o 00:02:12.149 CC app/trace_record/trace_record.o 00:02:12.149 CC app/spdk_top/spdk_top.o 00:02:12.149 CC app/spdk_nvme_perf/perf.o 00:02:12.150 TEST_HEADER include/spdk/accel.h 00:02:12.150 TEST_HEADER include/spdk/accel_module.h 00:02:12.150 TEST_HEADER include/spdk/barrier.h 00:02:12.150 TEST_HEADER include/spdk/assert.h 00:02:12.150 TEST_HEADER include/spdk/base64.h 00:02:12.150 TEST_HEADER include/spdk/bdev.h 00:02:12.150 TEST_HEADER include/spdk/bdev_module.h 00:02:12.150 CC test/rpc_client/rpc_client_test.o 00:02:12.150 CC app/spdk_nvme_identify/identify.o 00:02:12.150 TEST_HEADER include/spdk/bdev_zone.h 00:02:12.150 TEST_HEADER include/spdk/bit_array.h 00:02:12.150 TEST_HEADER include/spdk/bit_pool.h 00:02:12.150 TEST_HEADER include/spdk/blob_bdev.h 00:02:12.150 TEST_HEADER include/spdk/blobfs.h 00:02:12.150 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:12.150 TEST_HEADER include/spdk/conf.h 00:02:12.150 TEST_HEADER include/spdk/blob.h 00:02:12.150 TEST_HEADER include/spdk/config.h 00:02:12.150 TEST_HEADER include/spdk/cpuset.h 00:02:12.150 TEST_HEADER include/spdk/crc16.h 00:02:12.150 TEST_HEADER include/spdk/crc32.h 00:02:12.150 TEST_HEADER include/spdk/crc64.h 00:02:12.150 TEST_HEADER include/spdk/dif.h 00:02:12.150 CC app/spdk_nvme_discover/discovery_aer.o 00:02:12.150 TEST_HEADER include/spdk/dma.h 00:02:12.150 TEST_HEADER include/spdk/endian.h 00:02:12.150 TEST_HEADER include/spdk/env.h 00:02:12.150 TEST_HEADER include/spdk/env_dpdk.h 00:02:12.150 TEST_HEADER include/spdk/event.h 00:02:12.150 TEST_HEADER include/spdk/fd.h 00:02:12.150 TEST_HEADER include/spdk/fd_group.h 00:02:12.150 TEST_HEADER include/spdk/file.h 00:02:12.150 TEST_HEADER include/spdk/ftl.h 00:02:12.150 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.150 TEST_HEADER include/spdk/gpt_spec.h 00:02:12.150 TEST_HEADER include/spdk/hexlify.h 00:02:12.150 TEST_HEADER include/spdk/histogram_data.h 00:02:12.150 TEST_HEADER include/spdk/idxd.h 00:02:12.150 TEST_HEADER include/spdk/idxd_spec.h 00:02:12.150 TEST_HEADER include/spdk/init.h 00:02:12.150 TEST_HEADER include/spdk/ioat.h 00:02:12.150 TEST_HEADER include/spdk/ioat_spec.h 00:02:12.150 TEST_HEADER include/spdk/iscsi_spec.h 00:02:12.150 TEST_HEADER include/spdk/json.h 00:02:12.150 TEST_HEADER include/spdk/keyring.h 00:02:12.150 TEST_HEADER include/spdk/jsonrpc.h 00:02:12.150 TEST_HEADER include/spdk/keyring_module.h 00:02:12.150 TEST_HEADER include/spdk/likely.h 00:02:12.150 TEST_HEADER include/spdk/log.h 00:02:12.150 TEST_HEADER include/spdk/lvol.h 00:02:12.150 TEST_HEADER include/spdk/memory.h 00:02:12.150 TEST_HEADER include/spdk/mmio.h 00:02:12.150 TEST_HEADER include/spdk/nbd.h 00:02:12.150 
TEST_HEADER include/spdk/notify.h 00:02:12.150 TEST_HEADER include/spdk/nvme.h 00:02:12.150 TEST_HEADER include/spdk/nvme_intel.h 00:02:12.150 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:12.150 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:12.150 TEST_HEADER include/spdk/nvme_spec.h 00:02:12.150 TEST_HEADER include/spdk/nvme_zns.h 00:02:12.150 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:12.150 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:12.150 TEST_HEADER include/spdk/nvmf_spec.h 00:02:12.150 TEST_HEADER include/spdk/nvmf.h 00:02:12.150 TEST_HEADER include/spdk/nvmf_transport.h 00:02:12.150 TEST_HEADER include/spdk/opal.h 00:02:12.150 TEST_HEADER include/spdk/opal_spec.h 00:02:12.150 TEST_HEADER include/spdk/pci_ids.h 00:02:12.150 TEST_HEADER include/spdk/queue.h 00:02:12.150 TEST_HEADER include/spdk/pipe.h 00:02:12.150 CC app/spdk_dd/spdk_dd.o 00:02:12.150 TEST_HEADER include/spdk/reduce.h 00:02:12.150 TEST_HEADER include/spdk/rpc.h 00:02:12.150 TEST_HEADER include/spdk/scheduler.h 00:02:12.150 TEST_HEADER include/spdk/scsi.h 00:02:12.150 TEST_HEADER include/spdk/scsi_spec.h 00:02:12.150 TEST_HEADER include/spdk/sock.h 00:02:12.150 TEST_HEADER include/spdk/stdinc.h 00:02:12.150 TEST_HEADER include/spdk/string.h 00:02:12.150 TEST_HEADER include/spdk/thread.h 00:02:12.150 TEST_HEADER include/spdk/trace.h 00:02:12.150 TEST_HEADER include/spdk/trace_parser.h 00:02:12.150 TEST_HEADER include/spdk/tree.h 00:02:12.150 TEST_HEADER include/spdk/ublk.h 00:02:12.150 TEST_HEADER include/spdk/util.h 00:02:12.150 TEST_HEADER include/spdk/uuid.h 00:02:12.150 TEST_HEADER include/spdk/version.h 00:02:12.150 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:12.150 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:12.150 TEST_HEADER include/spdk/vhost.h 00:02:12.150 TEST_HEADER include/spdk/vmd.h 00:02:12.150 TEST_HEADER include/spdk/xor.h 00:02:12.150 TEST_HEADER include/spdk/zipf.h 00:02:12.150 CXX test/cpp_headers/accel.o 00:02:12.150 CXX test/cpp_headers/accel_module.o 00:02:12.150 CXX test/cpp_headers/assert.o 00:02:12.150 CXX test/cpp_headers/barrier.o 00:02:12.150 CXX test/cpp_headers/bdev.o 00:02:12.150 CXX test/cpp_headers/base64.o 00:02:12.150 CXX test/cpp_headers/bdev_module.o 00:02:12.150 CXX test/cpp_headers/bdev_zone.o 00:02:12.150 CXX test/cpp_headers/bit_array.o 00:02:12.150 CXX test/cpp_headers/bit_pool.o 00:02:12.150 CC app/iscsi_tgt/iscsi_tgt.o 00:02:12.150 CXX test/cpp_headers/blob_bdev.o 00:02:12.150 CXX test/cpp_headers/blobfs_bdev.o 00:02:12.150 CXX test/cpp_headers/blobfs.o 00:02:12.150 CC app/nvmf_tgt/nvmf_main.o 00:02:12.150 CXX test/cpp_headers/conf.o 00:02:12.150 CXX test/cpp_headers/blob.o 00:02:12.150 CXX test/cpp_headers/config.o 00:02:12.150 CXX test/cpp_headers/crc16.o 00:02:12.150 CXX test/cpp_headers/cpuset.o 00:02:12.150 CXX test/cpp_headers/crc32.o 00:02:12.150 CXX test/cpp_headers/crc64.o 00:02:12.150 CXX test/cpp_headers/dif.o 00:02:12.150 CXX test/cpp_headers/dma.o 00:02:12.150 CXX test/cpp_headers/endian.o 00:02:12.150 CXX test/cpp_headers/env_dpdk.o 00:02:12.150 CXX test/cpp_headers/env.o 00:02:12.150 CXX test/cpp_headers/event.o 00:02:12.150 CXX test/cpp_headers/fd_group.o 00:02:12.150 CXX test/cpp_headers/fd.o 00:02:12.150 CXX test/cpp_headers/file.o 00:02:12.150 CC app/spdk_tgt/spdk_tgt.o 00:02:12.150 CXX test/cpp_headers/ftl.o 00:02:12.150 CXX test/cpp_headers/gpt_spec.o 00:02:12.150 CXX test/cpp_headers/hexlify.o 00:02:12.150 CXX test/cpp_headers/histogram_data.o 00:02:12.150 CXX test/cpp_headers/idxd.o 00:02:12.150 CXX test/cpp_headers/idxd_spec.o 00:02:12.150 
CXX test/cpp_headers/init.o 00:02:12.150 CXX test/cpp_headers/ioat.o 00:02:12.150 CXX test/cpp_headers/ioat_spec.o 00:02:12.150 CXX test/cpp_headers/iscsi_spec.o 00:02:12.150 CXX test/cpp_headers/json.o 00:02:12.150 CC examples/ioat/verify/verify.o 00:02:12.150 CXX test/cpp_headers/jsonrpc.o 00:02:12.150 CC examples/ioat/perf/perf.o 00:02:12.150 CC test/app/jsoncat/jsoncat.o 00:02:12.150 CC test/env/pci/pci_ut.o 00:02:12.150 CC test/thread/poller_perf/poller_perf.o 00:02:12.150 CC test/app/stub/stub.o 00:02:12.150 CC test/app/histogram_perf/histogram_perf.o 00:02:12.150 CC test/thread/lock/spdk_lock.o 00:02:12.150 CC app/fio/nvme/fio_plugin.o 00:02:12.150 CC test/env/vtophys/vtophys.o 00:02:12.151 CC examples/util/zipf/zipf.o 00:02:12.151 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:12.151 CC test/env/memory/memory_ut.o 00:02:12.151 CXX test/cpp_headers/keyring.o 00:02:12.151 LINK spdk_lspci 00:02:12.151 CC test/dma/test_dma/test_dma.o 00:02:12.413 CC app/fio/bdev/fio_plugin.o 00:02:12.413 CC test/app/bdev_svc/bdev_svc.o 00:02:12.413 CC test/env/mem_callbacks/mem_callbacks.o 00:02:12.413 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:12.413 LINK rpc_client_test 00:02:12.413 LINK spdk_nvme_discover 00:02:12.413 LINK interrupt_tgt 00:02:12.413 LINK spdk_trace_record 00:02:12.413 CXX test/cpp_headers/keyring_module.o 00:02:12.413 LINK jsoncat 00:02:12.413 CXX test/cpp_headers/likely.o 00:02:12.413 CXX test/cpp_headers/log.o 00:02:12.413 CXX test/cpp_headers/lvol.o 00:02:12.413 CXX test/cpp_headers/memory.o 00:02:12.413 CXX test/cpp_headers/mmio.o 00:02:12.413 CXX test/cpp_headers/nbd.o 00:02:12.413 LINK vtophys 00:02:12.413 CXX test/cpp_headers/notify.o 00:02:12.413 CXX test/cpp_headers/nvme.o 00:02:12.413 CXX test/cpp_headers/nvme_intel.o 00:02:12.413 CXX test/cpp_headers/nvme_ocssd.o 00:02:12.413 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:12.413 LINK poller_perf 00:02:12.413 CXX test/cpp_headers/nvme_spec.o 00:02:12.413 CXX test/cpp_headers/nvme_zns.o 00:02:12.413 CXX test/cpp_headers/nvmf_cmd.o 00:02:12.413 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:12.413 CXX test/cpp_headers/nvmf.o 00:02:12.413 CXX test/cpp_headers/nvmf_spec.o 00:02:12.413 CXX test/cpp_headers/nvmf_transport.o 00:02:12.413 LINK histogram_perf 00:02:12.413 CXX test/cpp_headers/opal.o 00:02:12.413 CXX test/cpp_headers/opal_spec.o 00:02:12.413 CXX test/cpp_headers/pci_ids.o 00:02:12.413 CXX test/cpp_headers/pipe.o 00:02:12.413 CXX test/cpp_headers/queue.o 00:02:12.413 CXX test/cpp_headers/reduce.o 00:02:12.413 LINK zipf 00:02:12.413 CXX test/cpp_headers/rpc.o 00:02:12.413 CXX test/cpp_headers/scheduler.o 00:02:12.413 CXX test/cpp_headers/scsi.o 00:02:12.413 CXX test/cpp_headers/scsi_spec.o 00:02:12.413 CXX test/cpp_headers/sock.o 00:02:12.413 LINK env_dpdk_post_init 00:02:12.413 CXX test/cpp_headers/stdinc.o 00:02:12.413 CXX test/cpp_headers/string.o 00:02:12.413 CXX test/cpp_headers/thread.o 00:02:12.413 CXX test/cpp_headers/trace.o 00:02:12.413 CXX test/cpp_headers/trace_parser.o 00:02:12.413 CXX test/cpp_headers/tree.o 00:02:12.413 CXX test/cpp_headers/ublk.o 00:02:12.413 LINK stub 00:02:12.413 CXX test/cpp_headers/util.o 00:02:12.413 LINK nvmf_tgt 00:02:12.413 LINK verify 00:02:12.413 LINK ioat_perf 00:02:12.413 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:12.414 struct spdk_nvme_fdp_ruhs ruhs; 00:02:12.414 ^ 00:02:12.414 LINK iscsi_tgt 00:02:12.414 CXX 
test/cpp_headers/uuid.o 00:02:12.414 LINK spdk_tgt 00:02:12.414 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:12.671 CXX test/cpp_headers/version.o 00:02:12.671 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:12.671 LINK bdev_svc 00:02:12.671 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:12.671 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:12.671 LINK spdk_trace 00:02:12.671 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:12.671 CXX test/cpp_headers/vfio_user_pci.o 00:02:12.671 CXX test/cpp_headers/vfio_user_spec.o 00:02:12.671 CXX test/cpp_headers/vhost.o 00:02:12.671 CXX test/cpp_headers/vmd.o 00:02:12.671 CXX test/cpp_headers/xor.o 00:02:12.671 CXX test/cpp_headers/zipf.o 00:02:12.671 LINK test_dma 00:02:12.671 LINK spdk_dd 00:02:12.671 LINK pci_ut 00:02:12.927 1 warning generated. 00:02:12.927 LINK nvme_fuzz 00:02:12.927 LINK spdk_bdev 00:02:12.927 LINK spdk_nvme 00:02:12.927 LINK llvm_vfio_fuzz 00:02:12.927 LINK spdk_nvme_identify 00:02:12.927 LINK spdk_nvme_perf 00:02:12.927 LINK mem_callbacks 00:02:12.927 LINK vhost_fuzz 00:02:13.184 LINK spdk_top 00:02:13.184 CC examples/sock/hello_world/hello_sock.o 00:02:13.184 CC examples/idxd/perf/perf.o 00:02:13.184 CC examples/vmd/led/led.o 00:02:13.184 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.184 LINK llvm_nvme_fuzz 00:02:13.184 CC app/vhost/vhost.o 00:02:13.184 CC examples/thread/thread/thread_ex.o 00:02:13.184 LINK memory_ut 00:02:13.442 LINK led 00:02:13.442 LINK lsvmd 00:02:13.442 LINK hello_sock 00:02:13.442 LINK vhost 00:02:13.442 LINK idxd_perf 00:02:13.442 LINK thread 00:02:13.442 LINK spdk_lock 00:02:13.700 LINK iscsi_fuzz 00:02:14.265 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:14.265 CC examples/nvme/hello_world/hello_world.o 00:02:14.265 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:14.265 CC examples/nvme/reconnect/reconnect.o 00:02:14.265 CC examples/nvme/hotplug/hotplug.o 00:02:14.265 CC examples/nvme/arbitration/arbitration.o 00:02:14.265 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:14.265 CC examples/nvme/abort/abort.o 00:02:14.265 CC test/event/reactor_perf/reactor_perf.o 00:02:14.265 CC test/event/event_perf/event_perf.o 00:02:14.265 CC test/event/reactor/reactor.o 00:02:14.265 CC test/event/app_repeat/app_repeat.o 00:02:14.265 CC test/event/scheduler/scheduler.o 00:02:14.265 LINK pmr_persistence 00:02:14.265 LINK reactor 00:02:14.265 LINK hello_world 00:02:14.265 LINK app_repeat 00:02:14.265 LINK cmb_copy 00:02:14.265 LINK reactor_perf 00:02:14.265 LINK event_perf 00:02:14.265 LINK hotplug 00:02:14.265 LINK reconnect 00:02:14.522 LINK arbitration 00:02:14.522 LINK scheduler 00:02:14.522 LINK nvme_manage 00:02:14.522 LINK abort 00:02:14.780 CC test/nvme/aer/aer.o 00:02:14.780 CC test/nvme/e2edp/nvme_dp.o 00:02:14.780 CC test/nvme/simple_copy/simple_copy.o 00:02:14.780 CC test/nvme/reserve/reserve.o 00:02:14.780 CC test/nvme/sgl/sgl.o 00:02:14.780 CC test/nvme/reset/reset.o 00:02:14.780 CC test/nvme/fdp/fdp.o 00:02:14.780 CC test/nvme/fused_ordering/fused_ordering.o 00:02:14.780 CC test/nvme/err_injection/err_injection.o 00:02:14.780 CC test/nvme/overhead/overhead.o 00:02:14.780 CC test/nvme/connect_stress/connect_stress.o 00:02:14.780 CC test/nvme/startup/startup.o 00:02:14.780 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:14.780 CC test/nvme/boot_partition/boot_partition.o 00:02:14.780 CC test/nvme/compliance/nvme_compliance.o 00:02:14.780 CC test/nvme/cuse/cuse.o 00:02:14.780 CC test/blobfs/mkfs/mkfs.o 00:02:14.780 CC test/accel/dif/dif.o 00:02:14.780 CC test/lvol/esnap/esnap.o 
00:02:14.780 LINK startup 00:02:14.780 LINK boot_partition 00:02:14.780 LINK connect_stress 00:02:14.780 LINK doorbell_aers 00:02:14.780 LINK reserve 00:02:14.780 LINK fused_ordering 00:02:14.780 LINK simple_copy 00:02:14.780 LINK mkfs 00:02:14.780 LINK reset 00:02:14.780 LINK nvme_dp 00:02:14.780 LINK sgl 00:02:14.780 LINK fdp 00:02:14.780 LINK aer 00:02:15.038 LINK err_injection 00:02:15.038 LINK overhead 00:02:15.038 LINK nvme_compliance 00:02:15.038 LINK dif 00:02:15.301 CC examples/accel/perf/accel_perf.o 00:02:15.301 CC examples/blob/cli/blobcli.o 00:02:15.301 CC examples/blob/hello_world/hello_blob.o 00:02:15.560 LINK hello_blob 00:02:15.560 LINK cuse 00:02:15.560 LINK accel_perf 00:02:15.817 LINK blobcli 00:02:16.384 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.384 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.642 LINK hello_bdev 00:02:16.900 CC test/bdev/bdevio/bdevio.o 00:02:16.900 LINK bdevperf 00:02:17.158 LINK bdevio 00:02:18.093 LINK esnap 00:02:18.660 CC examples/nvmf/nvmf/nvmf.o 00:02:18.660 LINK nvmf 00:02:20.033 00:02:20.033 real 0m48.449s 00:02:20.033 user 6m13.099s 00:02:20.033 sys 2m28.786s 00:02:20.033 13:47:58 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:20.033 13:47:58 make -- common/autotest_common.sh@10 -- $ set +x 00:02:20.033 ************************************ 00:02:20.033 END TEST make 00:02:20.033 ************************************ 00:02:20.290 13:47:58 -- common/autotest_common.sh@1142 -- $ return 0 00:02:20.290 13:47:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:20.290 13:47:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:20.290 13:47:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:20.290 13:47:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.290 13:47:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:20.290 13:47:58 -- pm/common@44 -- $ pid=2723641 00:02:20.290 13:47:58 -- pm/common@50 -- $ kill -TERM 2723641 00:02:20.290 13:47:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.290 13:47:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:20.290 13:47:58 -- pm/common@44 -- $ pid=2723643 00:02:20.290 13:47:58 -- pm/common@50 -- $ kill -TERM 2723643 00:02:20.290 13:47:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.290 13:47:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:20.290 13:47:58 -- pm/common@44 -- $ pid=2723645 00:02:20.290 13:47:58 -- pm/common@50 -- $ kill -TERM 2723645 00:02:20.290 13:47:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.290 13:47:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:20.290 13:47:58 -- pm/common@44 -- $ pid=2723668 00:02:20.290 13:47:58 -- pm/common@50 -- $ sudo -E kill -TERM 2723668 00:02:20.290 13:47:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:20.290 13:47:58 -- nvmf/common.sh@7 -- # uname -s 00:02:20.290 13:47:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:20.290 13:47:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:20.290 13:47:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:20.290 13:47:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:20.290 
13:47:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:20.290 13:47:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:20.290 13:47:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:20.290 13:47:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:20.290 13:47:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:20.290 13:47:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:20.290 13:47:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:02:20.290 13:47:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:02:20.290 13:47:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:20.290 13:47:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:20.290 13:47:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:20.290 13:47:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:20.290 13:47:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:20.290 13:47:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:20.290 13:47:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.290 13:47:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.291 13:47:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.291 13:47:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.291 13:47:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.291 13:47:58 -- paths/export.sh@5 -- # export PATH 00:02:20.291 13:47:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.291 13:47:58 -- nvmf/common.sh@47 -- # : 0 00:02:20.291 13:47:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:20.291 13:47:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:20.291 13:47:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:20.291 13:47:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:20.291 13:47:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:20.291 13:47:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:20.291 13:47:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:20.291 13:47:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:20.291 13:47:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:20.291 13:47:58 -- spdk/autotest.sh@32 -- # uname -s 00:02:20.291 13:47:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:20.291 13:47:58 -- 
spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:20.291 13:47:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps
00:02:20.291 13:47:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:20.291 13:47:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps
00:02:20.291 13:47:58 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:20.291 13:47:58 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:20.291 13:47:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:20.291 13:47:58 -- spdk/autotest.sh@48 -- # udevadm_pid=2782382
00:02:20.291 13:47:58 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:20.291 13:47:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:20.291 13:47:58 -- pm/common@17 -- # local monitor
00:02:20.291 13:47:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.291 13:47:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.291 13:47:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.291 13:47:58 -- pm/common@21 -- # date +%s
00:02:20.291 13:47:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:20.291 13:47:58 -- pm/common@21 -- # date +%s
00:02:20.291 13:47:58 -- pm/common@25 -- # sleep 1
00:02:20.291 13:47:58 -- pm/common@21 -- # date +%s
00:02:20.291 13:47:58 -- pm/common@21 -- # date +%s
00:02:20.291 13:47:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044078
00:02:20.291 13:47:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044078
00:02:20.291 13:47:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044078
00:02:20.291 13:47:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721044078
00:02:20.549 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044078_collect-vmstat.pm.log
00:02:20.549 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044078_collect-cpu-load.pm.log
00:02:20.549 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044078_collect-cpu-temp.pm.log
00:02:20.549 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721044078_collect-bmc-pm.bmc.pm.log
00:02:21.486 13:47:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:21.486 13:47:59 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:21.486 13:47:59 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:21.486 13:47:59 -- common/autotest_common.sh@10 -- # set +x
00:02:21.486 13:47:59 -- spdk/autotest.sh@59 -- # create_test_list
00:02:21.486 13:47:59 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:21.487 13:47:59 -- common/autotest_common.sh@10 -- # set +x
00:02:21.487 13:47:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh
00:02:21.487 13:47:59 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:21.487 13:47:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:21.487 13:47:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:02:21.487 13:47:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:21.487 13:47:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:21.487 13:47:59 -- common/autotest_common.sh@1455 -- # uname
00:02:21.487 13:47:59 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:21.487 13:47:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:21.487 13:47:59 -- common/autotest_common.sh@1475 -- # uname
00:02:21.487 13:47:59 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:21.487 13:47:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:21.487 13:47:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang
00:02:21.487 13:47:59 -- spdk/autotest.sh@72 -- # hash lcov
00:02:21.487 13:47:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:02:21.487 13:47:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:21.487 13:47:59 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:21.487 13:47:59 -- common/autotest_common.sh@10 -- # set +x
00:02:21.487 13:47:59 -- spdk/autotest.sh@91 -- # rm -f
00:02:21.487 13:47:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:25.674 0000:5e:00.0 (144d a80a): Already using the nvme driver
00:02:25.674 0000:af:00.0 (8086 2701): Already using the nvme driver
00:02:25.674 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:b0:00.0 (8086 2701): Already using the nvme driver
00:02:25.674 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:25.674 0000:80:04.0 (8086 2021): Already using the ioatdma driver
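The records above show autotest.sh swapping the kernel's crash handler: the systemd-coredump pipeline is saved away and core_pattern is pointed at SPDK's core-collector.sh so that any core dumped during the run lands under output/coredumps. A minimal sketch of the same mechanism, with illustrative variable names (writing core_pattern requires root):

# Save whatever handler was installed, then pipe future dumps into our collector.
old_core_pattern=$(</proc/sys/kernel/core_pattern)
mkdir -p "$output_dir/coredumps"
# A leading '|' makes the kernel hand each core dump to the named program;
# %P, %s and %t expand to the crashing PID, the signal number and the dump time.
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# Restore the original handler when the run finishes.
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT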
00:02:25.674 13:48:03 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:25.674 13:48:03 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:25.674 13:48:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:25.674 13:48:03 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:25.674 13:48:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:25.674 13:48:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:25.674 13:48:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:25.674 13:48:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:25.674 13:48:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:25.674 13:48:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:25.674 13:48:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:02:25.674 13:48:03 -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:02:25.674 13:48:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:02:25.674 13:48:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:25.674 13:48:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:25.674 13:48:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1
00:02:25.674 13:48:03 -- common/autotest_common.sh@1662 -- # local device=nvme2n1
00:02:25.674 13:48:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:02:25.674 13:48:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:25.674 13:48:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:25.674 13:48:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:25.674 13:48:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:25.674 13:48:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:25.674 13:48:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:25.674 13:48:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:25.674 No valid GPT data, bailing
00:02:25.674 13:48:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:25.674 13:48:03 -- scripts/common.sh@391 -- # pt=
00:02:25.674 13:48:03 -- scripts/common.sh@392 -- # return 1
00:02:25.674 13:48:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:25.674 1+0 records in
00:02:25.674 1+0 records out
00:02:25.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00167592 s, 626 MB/s
00:02:25.674 13:48:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:25.674 13:48:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:25.674 13:48:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1
00:02:25.674 13:48:03 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt
00:02:25.674 13:48:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:02:25.933 No valid GPT data, bailing
00:02:25.933 13:48:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:02:25.933 13:48:03 -- scripts/common.sh@391 -- # pt=
00:02:25.933 13:48:03 -- scripts/common.sh@392 -- # return 1
00:02:25.933 13:48:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:02:25.933 1+0 records in
00:02:25.933 1+0 records out
00:02:25.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382366 s, 274 MB/s
00:02:25.933 13:48:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:25.933 13:48:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:25.933 13:48:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1
00:02:25.933 13:48:03 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt
00:02:25.933 13:48:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:02:25.933 No valid GPT data, bailing
00:02:25.933 13:48:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:02:25.933 13:48:03 -- scripts/common.sh@391 -- # pt=
00:02:25.933 13:48:03 -- scripts/common.sh@392 -- # return 1
00:02:25.933 13:48:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:02:25.933 1+0 records in
00:02:25.933 1+0 records out
00:02:25.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421918 s, 249 MB/s
00:02:25.933 13:48:03 -- spdk/autotest.sh@118 -- # sync
00:02:25.933 13:48:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:25.933 13:48:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:25.933 13:48:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes
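Before any tests run, each whole NVMe namespace is checked for a partition table and, when none is found, its first megabyte is zeroed so stale GPT or filesystem metadata cannot leak into later tests. Roughly, and leaving aside the spdk-gpt.py probe the script also performs:

shopt -s extglob                            # needed for the /dev/nvme*n!(*p*) pattern above
for dev in /dev/nvme*n!(*p*); do            # whole namespaces only, no partitions
    pt=$(blkid -s PTTYPE -o value "$dev")   # empty output means no partition table
    if [[ -z $pt ]]; then
        # Device is not in use: scrub the first MiB (GPT header, old superblocks).
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done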
00:02:31.205 13:48:09 -- spdk/autotest.sh@124 -- # uname -s
00:02:31.205 13:48:09 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:31.205 13:48:09 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:31.205 13:48:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:31.205 13:48:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:31.205 13:48:09 -- common/autotest_common.sh@10 -- # set +x
00:02:31.205 ************************************
00:02:31.205 START TEST setup.sh
00:02:31.205 ************************************
00:02:31.205 13:48:09 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:31.205 * Looking for test storage...
00:02:31.205 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:31.205 13:48:09 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:31.205 13:48:09 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:31.205 13:48:09 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh
00:02:31.205 13:48:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:31.205 13:48:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:31.205 13:48:09 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:31.464 ************************************
00:02:31.464 START TEST acl
00:02:31.464 ************************************
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh
00:02:31.464 * Looking for test storage...
00:02:31.464 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:31.464 13:48:09 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:02:31.464 13:48:09 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:31.464 13:48:09 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:02:31.464 13:48:09 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:02:31.464 13:48:09 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:02:31.464 13:48:09 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:02:31.464 13:48:09 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:02:31.464 13:48:09 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:31.464 13:48:09 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:35.656 13:48:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:02:35.656 13:48:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:02:35.656 13:48:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:35.656 13:48:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:02:35.656 13:48:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:02:35.656 13:48:13 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:02:39.849 Hugepages
00:02:39.849 node hugesize free / total
00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:39.849 13:48:17
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.849 00:02:39.849 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:39.849 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:39.850 13:48:17 setup.sh.acl -- 
setup/acl.sh@22 -- # devs+=("$dev") 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:af:00.0 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:b0:00.0 == *:*:*.* ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == 
*\0\0\0\0\:\b\0\:\0\0\.\0* ]]
00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 ))
00:02:39.850 13:48:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:39.850 13:48:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:39.850 13:48:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:39.850 13:48:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:39.850 ************************************
00:02:39.850 START TEST denied
00:02:39.850 ************************************
00:02:39.850 13:48:17 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:02:39.850 13:48:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:02:39.850 13:48:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:39.850 13:48:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:02:39.850 13:48:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:39.850 13:48:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:02:44.042 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:44.042 13:48:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:49.312
00:02:49.312 real 0m9.248s
00:02:49.312 user 0m2.921s
00:02:49.312 sys 0m5.514s
00:02:49.312 13:48:26 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:49.312 13:48:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:49.312 ************************************
00:02:49.312 END TEST denied
00:02:49.312 ************************************
00:02:49.312 13:48:26 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
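The denied test drives setup.sh through PCI_BLOCKED, a deny-list of BDFs that must be left alone; it passes because the 144d:a80a controller at 0000:5e:00.0 is reported as skipped and stays on the kernel nvme driver. The filtering amounts to a substring match along these lines (illustrative only, not the literal setup.sh code):

PCI_BLOCKED=' 0000:5e:00.0'
for bdf in "${devs[@]}"; do
    # A BDF present in the blocked list keeps its current driver.
    if [[ " $PCI_BLOCKED " == *" $bdf "* ]]; then
        echo "Skipping denied controller at $bdf"
        continue
    fi
    # ...any other device would be rebound for test use here...
done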
00:02:49.312 13:48:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:49.312 13:48:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:49.312 13:48:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:49.312 13:48:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:49.312 ************************************
00:02:49.312 START TEST allowed
00:02:49.312 ************************************
00:02:49.312 13:48:27 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:02:49.312 13:48:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:02:49.312 13:48:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:02:49.312 13:48:27 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:02:49.312 13:48:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:02:49.312 13:48:27 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:02:54.587 0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:af:00.0 0000:b0:00.0
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:af:00.0 ]]
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/driver
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:b0:00.0 ]]
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:b0:00.0/driver
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:54.587 13:48:32 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:59.864
00:02:59.864 real 0m9.851s
00:02:59.864 user 0m2.900s
00:02:59.864 sys 0m5.385s
00:02:59.864 13:48:36 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:59.864 13:48:36 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:02:59.864 ************************************
00:02:59.864 END TEST allowed
00:02:59.864 ************************************
00:02:59.864 13:48:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:02:59.864
00:02:59.864 real 0m27.659s
00:02:59.864 user 0m8.926s
00:02:59.864 sys 0m16.668s
00:02:59.864 13:48:36 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:59.864 13:48:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:59.864 ************************************
00:02:59.864 END TEST acl
00:02:59.864 ************************************
00:02:59.864 13:48:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0
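The allowed test is the mirror image: with PCI_ALLOWED set, setup.sh config detaches 0000:5e:00.0 from the kernel nvme driver and hands it to vfio-pci, which the grep above confirms. The rebind itself follows the standard sysfs driver_override sequence; a sketch (root required, BDF hard-coded for illustration):

bdf=0000:5e:00.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach from nvme
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next driver
echo "$bdf" > /sys/bus/pci/drivers_probe                     # let the PCI core rebind it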
00:02:59.864 13:48:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:02:59.864 13:48:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:59.864 13:48:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:59.864 13:48:36 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:59.864 ************************************
00:02:59.864 START TEST hugepages
00:02:59.864 ************************************
00:02:59.864 13:48:37 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:02:59.864 * Looking for test storage...
00:02:59.864 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 37934708 kB' 'MemAvailable: 41518032 kB' 'Buffers: 2704 kB' 'Cached: 15222084 kB' 'SwapCached: 0 kB' 'Active: 12376112 kB' 'Inactive: 3465204 kB' 'Active(anon): 11864584 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619836 kB' 'Mapped: 203096 kB' 'Shmem: 11248056 kB' 'KReclaimable: 207068 kB' 'Slab: 628996 kB' 'SReclaimable: 207068 kB' 'SUnreclaim: 421928 kB' 'KernelStack: 16480 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439180 kB' 'Committed_AS: 13230512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203476 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- #
read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.864 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 
13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.865 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
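get_meminfo, traced above, is just a keyed lookup over /proc/meminfo: each line is split on ': ', and the value is echoed once the requested field matches, which is how Hugepagesize resolves to 2048 (kB) here. A self-contained equivalent of the same loop:

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' strips both the trailing colon on the key and the padding.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
default_hugepages=$(get_meminfo Hugepagesize)   # -> 2048 on this machine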
00:02:59.866 13:48:37 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:59.866 13:48:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:59.866 13:48:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:59.866 13:48:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:59.866 ************************************
00:02:59.866 START TEST default_setup
00:02:59.866 ************************************
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:59.866 13:48:37 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
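get_test_nr_hugepages, just traced, turns the requested pool size into a page count: 2097152 kB divided by the 2048 kB Hugepagesize gives nr_hugepages=1024, all of it assigned to node 0 since node_ids=('0'). The arithmetic and the sysfs knob it ultimately drives look like this (sketch; get_meminfo as above, root required):

size_kb=2097152                                  # requested pool, in kB
hugepagesize_kb=$(get_meminfo Hugepagesize)      # 2048 here
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024 pages
# Pin the whole allocation to NUMA node 0, as the trace above does:
echo "$nr_hugepages" > "/sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages"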
00:03:03.225 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:af:00.0 (8086 2701): nvme -> vfio-pci
00:03:03.225 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:03:03.225 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:b0:00.0 (8086 2701): nvme -> vfio-pci
00:03:03.225 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40145332 kB' 'MemAvailable: 43728328 kB' 'Buffers: 2704 kB' 'Cached: 15222180 kB' 'SwapCached: 0 kB' 'Active: 12394292 kB' 'Inactive: 3465204 kB' 'Active(anon): 11882764 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637448 kB' 'Mapped: 203604 kB' 'Shmem: 11248152 kB' 'KReclaimable: 206412 kB' 'Slab: 626464 kB' 'SReclaimable: 206412 kB' 'SUnreclaim: 420052 kB' 'KernelStack: 16816 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13252960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203460 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB'
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.225 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- 
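
The walk traced here is get_meminfo from setup/common.sh: read /proc/meminfo with mapfile, strip any "Node N " prefix, then scan "key: value" pairs with IFS=': ' until the requested field matches (AnonHugePages above, yielding anon=0; the same scan is now repeating for HugePages_Surp). A condensed, runnable sketch of that pattern (the _sketch name is hypothetical, and this simplifies the real function):

    #!/usr/bin/env bash
    # Condensed sketch of the lookup pattern traced above; not the real get_meminfo.
    shopt -s extglob                      # for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node id, the trace's -e test switches to the per-node view.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch AnonHugePages      # prints 0 on the box above
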
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.226 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40144232 kB' 'MemAvailable: 43727196 kB' 'Buffers: 2704 kB' 'Cached: 15222180 kB' 'SwapCached: 0 kB' 'Active: 12393580 kB' 'Inactive: 3465204 kB' 'Active(anon): 11882052 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637608 kB' 'Mapped: 203428 kB' 'Shmem: 11248152 kB' 'KReclaimable: 206348 kB' 'Slab: 626392 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420044 kB' 'KernelStack: 16576 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13252976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203492 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.227 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 
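
By this point the verifier has anon=0 (AnonHugePages) and surp=0 (HugePages_Surp), and it is repeating the scan for HugePages_Rsvd. The target value traces back to "get_test_nr_hugepages 2097152 0" at the top of the test: 2097152 kB at the default 2048 kB hugepage size is 1024 pages on node 0, matching the 'HugePages_Total: 1024' fields in the dumps above. A sketch of that arithmetic plus a guess at the shape of the final check (the real acceptance logic is verify_nr_hugepages in setup/hugepages.sh, which this only imitates):

    # Sketch of the size -> page-count arithmetic and a plausible end check.
    size_kb=2097152                        # requested pool size in kB (2 GiB)
    hp_kb=2048                             # Hugepagesize from the dumps above
    nr_hugepages=$(( size_kb / hp_kb ))    # 2097152 / 2048 = 1024

    # After a clean default_setup the dumps show total=1024, surp=0, rsvd=0.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages && surp == 0 && rsvd == 0 )) ||
        echo "unexpected hugepage state: total=$total surp=$surp rsvd=$rsvd" >&2
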
-- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40142708 kB' 'MemAvailable: 43725672 kB' 'Buffers: 2704 kB' 'Cached: 15222200 kB' 'SwapCached: 0 kB' 'Active: 12393708 kB' 'Inactive: 3465204 kB' 'Active(anon): 11882180 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637272 kB' 'Mapped: 203428 kB' 'Shmem: 11248172 kB' 'KReclaimable: 206348 kB' 'Slab: 626352 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420004 kB' 'KernelStack: 16720 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13251640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203604 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.228 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.229 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
[xtrace elided: setup/common.sh@31-32 walks the remaining /proc/meminfo keys (WritebackTmp through HugePages_Free) and prints "continue" for every key that is not HugePages_Rsvd]
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:03.492 nr_hugepages=1024
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:03.492 resv_hugepages=0
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:03.492 surplus_hugepages=0
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:03.492 anon_hugepages=0
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: get_meminfo (setup/common.sh@17-31) sets get=HugePages_Total with no node argument, so /sys/devices/system/node/node/meminfo does not exist and mem_f stays /proc/meminfo; it loads the file with mapfile -t mem, strips any "Node N " prefix, and scans the records with IFS=': ' read -r var val _]
00:03:03.492 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40142144 kB' 'MemAvailable: 43725108 kB' 'Buffers: 2704 kB' 'Cached: 15222220 kB' 'SwapCached: 0 kB' 'Active: 12396096 kB' 'Inactive: 3465204 kB' 'Active(anon): 11884568 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639624 kB' 'Mapped: 203932 kB' 'Shmem: 11248192 kB' 'KReclaimable: 206348 kB' 'Slab: 626352 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420004 kB' 'KernelStack: 16720 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13255224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
[xtrace elided: the scan prints "continue" for every key from MemTotal onward until it reaches HugePages_Total]
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
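The trace above is the get_meminfo helper scanning /proc/meminfo record by record: it splits each line on ': ', prints "continue" for every key that is not the one requested, and echoes the value once the key matches. Below is a minimal bash sketch of that pattern, reconstructed from the xtrace; the shipped setup/common.sh may differ in detail, so treat it as an approximation rather than the actual source.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  # get_meminfo KEY [NODE] -- echo the value of KEY from /proc/meminfo,
  # or from the per-node meminfo file when NODE is given (sketch).
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem line
      mem_f=/proc/meminfo
      # With a node argument, read the node-local counters instead; with no
      # argument the path ".../node/node/meminfo" does not exist, so the
      # check fails and /proc/meminfo is kept (exactly what the trace shows).
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every record with "Node N "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total      # -> 1024 on this machine
  get_meminfo HugePages_Surp 0     # -> 0, read from node0's meminfo

Because the key is compared literally (the [[ ... == \H\u\g\e... ]] escaping in the trace is just xtrace's way of printing a quoted pattern), a key such as Active(anon) never accidentally globs against the target.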
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace elided: get_meminfo runs again with get=HugePages_Surp and node=0; /sys/devices/system/node/node0/meminfo exists, so mem_f switches to that file before the mapfile/strip/scan sequence]
00:03:03.494 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 22432152 kB' 'MemUsed: 10154760 kB' 'SwapCached: 0 kB' 'Active: 6441964 kB' 'Inactive: 207432 kB' 'Active(anon): 6254160 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6202996 kB' 'Mapped: 70012 kB' 'AnonPages: 449524 kB' 'Shmem: 5807760 kB' 'KernelStack: 8584 kB' 'PageTables: 4860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337852 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the scan prints "continue" for every node0 key from MemTotal onward until it reaches HugePages_Surp]
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:03.495 node0=1024 expecting 1024
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:03.495 
00:03:03.495 real	0m4.146s
00:03:03.495 user	0m1.582s
00:03:03.495 sys	0m2.636s
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:03.495 13:48:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:03.495 ************************************
00:03:03.495 END TEST default_setup
00:03:03.495 ************************************
00:03:03.495 13:48:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:03.495 13:48:41 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:03.495 13:48:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:03.495 13:48:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:03.495 13:48:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:03.495 ************************************
00:03:03.495 START TEST per_node_1G_alloc
00:03:03.495 ************************************
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
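At this point default_setup has everything it needs: get_nodes found two NUMA nodes (node0 carrying all 1024 hugepages, node1 none), and the per-node pass confirmed node0=1024 against the expected 1024. A sketch of that enumeration and check, reconstructed from the hugepages.sh xtrace; the trace only shows the already-expanded counts (1024 and 0), so where nodes_sys is read from is an assumption, and the two per-node loops of the real script are merged into one here for brevity.

  shopt -s extglob nullglob
  nodes_sys=() nodes_test=() sorted_t=() sorted_s=()
  resv=${resv:-0}   # reserved hugepages, computed earlier in the test

  # Enumerate NUMA nodes and record how many 2 MiB hugepages each one holds.
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # Assumed source of the per-node count (not visible in this excerpt):
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))
  }

  # Per-node verification in the spirit of hugepages.sh@115-130: fold the
  # reserved and surplus pages into the expectation, then compare.
  verify_nodes() {
      local node surp
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))
          surp=$(get_meminfo HugePages_Surp "$node")
          (( nodes_test[node] += surp ))
          sorted_t[nodes_test[node]]=1   # distinct expected counts
          sorted_s[nodes_sys[node]]=1    # distinct actual counts
          echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      done
  }

On this run the check reduces to 1024 == 1024, so default_setup passes and the suite moves on to per_node_1G_alloc.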
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.495 13:48:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:07.694 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:07.694 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:07.694 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:07.694 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:07.694 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
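per_node_1G_alloc asked for size=1048576 kB (1 GiB) per node; at the default 2048 kB hugepage size that works out to the nr_hugepages=512 per node seen above, and the request is handed to scripts/setup.sh as NRHUGE=512 HUGENODE=0,1. A minimal sketch of how such a per-node request maps onto the kernel's sysfs interface follows; this is a simplification under stated assumptions, since the real setup.sh also handles device binding (hence the vfio-pci lines above) and other sizes.

  #!/usr/bin/env bash
  # Allocate NRHUGE 2 MiB hugepages on each node listed in HUGENODE.
  # Needs root: it writes the kernel's per-node nr_hugepages knob directly.
  NRHUGE=${NRHUGE:-512}
  HUGENODE=${HUGENODE:-0,1}

  IFS=',' read -ra nodes <<< "$HUGENODE"
  for n in "${nodes[@]}"; do
      echo "$NRHUGE" \
          > "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
  done

With both nodes populated, the cluster-wide total becomes 1024 pages, which is exactly the value verify_nr_hugepages starts from below.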
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.694 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40157660 kB' 'MemAvailable: 43740624 kB' 'Buffers: 2704 kB' 'Cached: 15222324 kB' 'SwapCached: 0 kB' 'Active: 12391052 kB' 'Inactive: 3465204 kB' 'Active(anon): 11879524 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634400 kB' 'Mapped: 202272 kB' 'Shmem: 11248296 kB' 'KReclaimable: 206348 kB' 'Slab: 626412 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420064 kB' 'KernelStack: 16608 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13237788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203572 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 
22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:07.694 13:48:45 [trace condensed: setup/common.sh@32's read loop walks every /proc/meminfo key from MemTotal onward, taking the continue branch on each non-matching field, until AnonHugePages matches] 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local
get=HugePages_Surp 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40158060 kB' 'MemAvailable: 43741024 kB' 'Buffers: 2704 kB' 'Cached: 15222328 kB' 'SwapCached: 0 kB' 'Active: 12390988 kB' 'Inactive: 3465204 kB' 'Active(anon): 11879460 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634444 kB' 'Mapped: 202264 kB' 'Shmem: 11248300 kB' 'KReclaimable: 206348 kB' 'Slab: 626428 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420080 kB' 'KernelStack: 16592 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13239168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203540 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.696 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.696 13:48:45 
[trace condensed: the same setup/common.sh@32 read loop again skips every /proc/meminfo key in turn until HugePages_Surp matches] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:07.697 13:48:45
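For readers skimming this trace: the long runs of "continue" condensed above come from setup/common.sh's get_meminfo walking /proc/meminfo one key at a time. A minimal sketch of that pattern, reconstructed from the trace alone (the real SPDK helper may differ in detail, for example in its per-node handling):

#!/usr/bin/env bash
shopt -s extglob  # for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo
    local -a mem
    # With a node argument the per-node meminfo file is preferred; with
    # none (node='', as in this trace) the -e/-n checks fail and the
    # global /proc/meminfo is used.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        # Each non-matching key is one "continue" line in the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the machine in this log
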
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40158088 kB' 'MemAvailable: 43741052 kB' 'Buffers: 2704 kB' 'Cached: 15222328 kB' 'SwapCached: 0 kB' 'Active: 12390460 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878932 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633932 kB' 'Mapped: 202264 kB' 'Shmem: 11248300 kB' 'KReclaimable: 206348 kB' 'Slab: 626428 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420080 kB' 'KernelStack: 16640 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13237976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203556 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.697 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_
00:03:07.698 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 [xtrace condensed: the read loop skips meminfo fields KernelStack through HugePages_Free, none of which match HugePages_Rsvd]
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:07.699 nr_hugepages=1024
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:07.699 resv_hugepages=0
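The @31/@32 pattern traced above is a single helper scanning a meminfo file field by field until it finds the requested key. A minimal bash reconstruction of that helper, inferred from this xtrace (a sketch under stated assumptions, not the verbatim setup/common.sh source):

  #!/usr/bin/env bash
  shopt -s extglob   # required by the +([0-9]) pattern below

  # get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from
  # the per-NUMA-node meminfo file when NODE is given.
  get_meminfo() {
      local get=$1 node=$2 var val line
      local mem_f=/proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <id> "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the repeated "continue" in the log
          echo "$val"                        # e.g. 0 for HugePages_Rsvd above
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total      # -> 1024 in this run
  get_meminfo HugePages_Surp 0     # -> 0 on node0 in this run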
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:07.699 surplus_hugepages=0
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:07.699 anon_hugepages=0
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40158132 kB' 'MemAvailable: 43741096 kB' 'Buffers: 2704 kB' 'Cached: 15222384 kB' 'SwapCached: 0 kB' 'Active: 12390392 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878864 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633772 kB' 'Mapped: 202264 kB' 'Shmem: 11248356 kB' 'KReclaimable: 206348 kB' 'Slab: 626428 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420080 kB' 'KernelStack: 16512 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13239212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203588 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:07.699 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 [xtrace condensed: the read loop skips meminfo fields MemTotal through Unaccepted, none of which match HugePages_Total]
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
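The @107 and @110 arithmetic above is the test's accounting invariant: the kernel-reported pool must equal requested + surplus + reserved pages, split evenly across NUMA nodes. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch shown earlier (variable names follow the trace; illustrative, not the verbatim setup/hugepages.sh):

  shopt -s extglob
  nr_hugepages=1024 surp=0 resv=0
  declare -a nodes_sys nodes_test

  # Global invariant: what the kernel reports must match our accounting.
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

  # Enumerate NUMA nodes; this run expects an even 512-page split per node.
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512
      nodes_test[${node##*node}]=512
  done

  # Fold per-node reserved and surplus pages into the expected counts.
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done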
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23499776 kB' 'MemUsed: 9087136 kB' 'SwapCached: 0 kB' 'Active: 6441636 kB' 'Inactive: 207432 kB' 'Active(anon): 6253832 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203028 kB' 'Mapped: 69272 kB' 'AnonPages: 449200 kB' 'Shmem: 5807792 kB' 'KernelStack: 8728 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337792 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:07.701 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 [xtrace condensed: the read loop skips node0 meminfo fields MemTotal through HugePages_Free, none of which match HugePages_Surp]
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 16659324 kB' 'MemUsed: 11049224 kB' 'SwapCached: 0 kB' 'Active: 5949796 kB' 'Inactive: 3257772 kB' 'Active(anon): 5626072 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3257772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9022060 kB' 'Mapped: 132992 kB' 'AnonPages: 185612 kB' 'Shmem: 5440564 kB' 'KernelStack: 7976 kB' 'PageTables: 3528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101012 kB' 'Slab: 288636 kB' 'SReclaimable: 101012 kB' 'SUnreclaim: 187624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
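The two per-node printf dumps above come from /sys/devices/system/node/nodeN/meminfo rather than /proc/meminfo; in the raw sysfs files every line carries a "Node <id> " prefix, which the @29 expansion in the trace strips. Assuming that standard sysfs layout, a one-liner to query a single per-node field directly:

  # Raw line reads "Node 0 HugePages_Surp:    0", so the field name is $3
  # and the value is $4.
  node=0
  awk '$3 == "HugePages_Surp:" {print $4}' \
      "/sys/devices/system/node/node${node}/meminfo"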
00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.702 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical get_meminfo iterations elided: the scan continues past every remaining /proc/meminfo field, one '[[ field == pattern ]] / continue' pair per field, until the requested one matches ...]
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.703 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:07.704 node0=512 expecting 512
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:07.704 node1=512 expecting 512
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:07.704
00:03:07.704 real	0m3.786s
00:03:07.704 user	0m1.406s
00:03:07.704 sys	0m2.424s
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:07.704 13:48:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:07.704 ************************************
00:03:07.704 END TEST per_node_1G_alloc
00:03:07.704 ************************************
00:03:07.704 13:48:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
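Everything get_meminfo does in the iterations condensed above is visible in the xtrace: pick the right meminfo file, strip any per-node "Node N " prefixes, then walk the fields until the requested one matches and echo its value. A self-contained sketch of that scan, written only from what the trace shows (the _sketch suffix marks it as an illustration, not the SPDK source itself):

    shopt -s extglob   # needed for the +([0-9]) pattern used in the prefix strip

    get_meminfo_sketch() {
        local get=$1 node=${2:-}   # requested field, optional NUMA node
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node queries read that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; drop it, as the
        # mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # one 'continue' per mismatched field
            echo "${val:-0}"                   # matched: print the value and stop
            return 0
        done
        echo 0
    }

Called as, say, get_meminfo_sketch HugePages_Surp 0, it would print the same 0 the scan above just returned; hugepages.sh adds that surplus into nodes_test[node] before comparing each node against the expected 512.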
00:03:07.704 13:48:45 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:07.704 13:48:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:07.704 13:48:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:07.704 13:48:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:07.704 ************************************
00:03:07.704 START TEST even_2G_alloc
00:03:07.704 ************************************
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.704 13:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
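The sizing arithmetic just traced is worth spelling out: 2097152 kB are requested, nr_hugepages lands at 1024, and get_test_nr_hugepages_per_node spreads that as 512 pages on each of the two nodes. A sketch of the same computation (the division by a 2048 kB default hugepage size is inferred from the values shown, not quoted from hugepages.sh; the even two-node split mirrors the nodes_test assignments above):

    size=2097152             # requested pool in kB, per 'local size=2097152'
    default_hugepages=2048   # 2 MiB hugepage size in kB (assumed from Hugepagesize)
    nr_hugepages=$((size / default_hugepages))   # -> 1024, matching the trace

    _no_nodes=2
    declare -a nodes_test
    while ((_no_nodes > 0)); do
        # mirrors 'nodes_test[_no_nodes - 1]=512': an even split across nodes
        nodes_test[_no_nodes - 1]=$((nr_hugepages / 2))
        _no_nodes=$((_no_nodes - 1))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh is re-run to realize that layout; its device rebinding output follows.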
00:03:10.994 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:10.994 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:10.994 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:10.994 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.994 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40162160 kB' 'MemAvailable: 43745124 kB' 'Buffers: 2704 kB' 'Cached: 15222480 kB' 'SwapCached: 0 kB' 'Active: 12391224 kB' 'Inactive: 3465204 kB' 'Active(anon): 11879696 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634448 kB' 'Mapped: 202304 kB' 'Shmem: 11248452 kB' 'KReclaimable: 206348 kB' 'Slab: 626232 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419884 kB' 'KernelStack: 16624 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13237504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.994 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical get_meminfo iterations elided: the scan continues past every remaining /proc/meminfo field until AnonHugePages matches ...]
00:03:10.995 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.995 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.995 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.995 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:10.995 13:48:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40162528 kB' 'MemAvailable: 43745492 kB' 'Buffers: 2704 kB' 'Cached: 15222484 kB' 'SwapCached: 0 kB' 'Active: 12390976 kB' 'Inactive: 3465204 kB' 'Active(anon): 11879448 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633804 kB' 'Mapped: 202276 kB' 'Shmem: 11248456 kB' 'KReclaimable: 206348 kB' 'Slab: 626288 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419940 kB' 'KernelStack: 16608 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13237524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.995 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical get_meminfo iterations elided: the scan continues field by field until HugePages_Surp matches ...]
00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
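verify_nr_hugepages now holds anon=0 and surp=0 and fetches HugePages_Rsvd next. The bookkeeping it appears to be doing (an inference from the three lookups in the trace, not a verbatim quote of hugepages.sh) is simply sampling the global counters so the per-node totals can be judged against NRHUGE=1024:

    # get_meminfo_sketch as sketched after the per_node_1G_alloc test above
    anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB here; sampled because THP is
                                                # 'always [madvise] never', i.e. not [never]
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0: no surplus pages in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # queried next in the trace; 0 per the dump
    total=$(get_meminfo_sketch HugePages_Total) # 1024, matching NRHUGE
    echo "total=$total surp=$surp resv=$resv anon=$anon"

With nothing surplus or reserved, the even allocation is headed for the same node0=512/node1=512 comparison the previous test ended with.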
13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40162528 kB' 'MemAvailable: 43745492 kB' 'Buffers: 2704 kB' 'Cached: 15222500 kB' 'SwapCached: 0 kB' 'Active: 12390992 kB' 'Inactive: 3465204 kB' 'Active(anon): 11879464 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634208 kB' 'Mapped: 202276 kB' 'Shmem: 11248472 kB' 'KReclaimable: 206348 kB' 'Slab: 626288 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419940 kB' 'KernelStack: 16624 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13237544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.997 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:10.997 
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:10.999 nr_hugepages=1024
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:10.999 resv_hugepages=0
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.999 surplus_hugepages=0
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.999 anon_hugepages=0
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
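Both the system-wide queries above and the per-node ones below reuse the same parser; the common.sh@29 step mem=("${mem[@]#Node +([0-9]) }") is what makes that possible, since lines in /sys/devices/system/node/nodeN/meminfo carry a "Node N " prefix that /proc/meminfo lacks. A short sketch of that normalization (the node0 path is only an example and exists only on NUMA machines):

  shopt -s extglob                  # +([0-9]) is an extended glob
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node0/meminfo ]] && mem_f=/sys/devices/system/node/node0/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 MemFree: ..." -> "MemFree: ..." (no-op for /proc/meminfo)
  printf '%s\n' "${mem[@]}"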
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.999 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.263 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.263 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.263 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.263 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40162528 kB' 'MemAvailable: 43745492 kB' 'Buffers: 2704 kB' 'Cached: 15222540 kB' 'SwapCached: 0 kB' 'Active: 12391024 kB' 'Inactive: 3465204 kB' 'Active(anon): 11879496 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634168 kB' 'Mapped: 202276 kB' 'Shmem: 11248512 kB' 'KReclaimable: 206348 kB' 'Slab: 626288 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419940 kB' 'KernelStack: 16624 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13237564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
[... xtrace of the per-key scan elided: every key from MemTotal through Unaccepted is compared against HugePages_Total and skipped via "continue" ...]
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
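get_nodes, traced just above, enumerates the NUMA nodes by globbing the sysfs directories and deriving each index from the directory name with ${node##*node}; no_nodes=2 and 512 pages per node match an even split of the 1024-page pool this test allocates. A condensed sketch under those same assumptions:

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512   # ".../node1" -> index 1; 512 = per-node count in this run
  done
  no_nodes=${#nodes_sys[@]}           # 2 on this machine
  (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2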
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.264 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23499876 kB' 'MemUsed: 9087036 kB' 'SwapCached: 0 kB' 'Active: 6440460 kB' 'Inactive: 207432 kB' 'Active(anon): 6252656 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203052 kB' 'Mapped: 69284 kB' 'AnonPages: 447944 kB' 'Shmem: 5807816 kB' 'KernelStack: 8552 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337852 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace of the per-key scan elided: every key from MemTotal through HugePages_Free in the node0 snapshot is compared against HugePages_Surp and skipped via "continue" ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.265 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.266 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 16663308 kB' 'MemUsed: 11045240 kB' 'SwapCached: 0 kB' 'Active: 5950584 kB' 'Inactive: 3257772 kB' 'Active(anon): 5626860 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3257772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9022216 kB' 'Mapped: 132992 kB' 'AnonPages: 186252 kB' 'Shmem: 5440720 kB' 'KernelStack: 8072 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
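For reference, the trace above is common.sh's get_meminfo walking a per-node meminfo file: pick /proc/meminfo or the node's sysfs file, strip the "Node N " prefix, then read "key: value" pairs until the requested key matches. A minimal re-creation of that mechanism, assuming extglob is enabled as in the traced shell; this is a sketch, not the verbatim SPDK helper:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node counters live in sysfs and carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " (extglob pattern)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp 1   # prints 0 on this box, per the trace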
[setup/common.sh@31-32: get_meminfo scans the node1 meminfo keys (MemTotal … HugePages_Free) against HugePages_Surp; none match, each iteration continues]
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:11.267 real	0m3.821s
00:03:11.267 user	0m1.415s
00:03:11.267 sys	0m2.473s
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:11.267 13:48:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:11.267 ************************************
00:03:11.267 END TEST even_2G_alloc
00:03:11.267 ************************************
00:03:11.267 13:48:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
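The test's pass condition, visible in the last few entries, is that every node ends up holding exactly the per-node count the test configured (1024 pages split 512+512). A hedged sketch of that final comparison, reusing the get_meminfo sketch above; the real verifier first folds surplus and reserved counts into nodes_test, which were all 0 in this run:

    # nodes_test holds the split the test asked for (512 per node here).
    declare -a nodes_test=([0]=512 [1]=512)
    for node in "${!nodes_test[@]}"; do
        got=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$got expecting ${nodes_test[node]}"
        [[ $got == "${nodes_test[node]}" ]] || exit 1   # test fails on mismatch
    done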
00:03:11.267 13:48:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:11.267 13:48:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:11.267 13:48:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:11.267 13:48:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:11.267 ************************************
00:03:11.267 START TEST odd_alloc
00:03:11.267 ************************************
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.267 13:48:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
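odd_alloc requests 2098176 kB (HUGEMEM=2049 MB): 2098176 / 2048 kB per page is 1024.5, rounded up to 1025 pages, deliberately odd so the count cannot split evenly across two nodes. The trace then hands one node 512 pages and the other the remaining 513. A sketch of that split under assumed rounding; the real logic is setup/hugepages.sh's get_test_nr_hugepages_per_node, which the trace shows assigning node1=512 first and leaving node0 the 513 remainder:

    # Split nr_hugepages across no_nodes, spreading the remainder one
    # page at a time, so 1025 over 2 nodes becomes 513 + 512.
    nr_hugepages=1025 no_nodes=2
    declare -a nodes_test
    per_node=$((nr_hugepages / no_nodes))   # 512
    extra=$((nr_hugepages % no_nodes))      # 1 page left over
    for ((node = 0; node < no_nodes; node++)); do
        nodes_test[node]=$((per_node + (node < extra ? 1 : 0)))
    done
    echo "${nodes_test[@]}"                 # 513 512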
00:03:15.462 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:15.462 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:15.462 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:15.462 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.462 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.462 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40168172 kB' 'MemAvailable: 43751136 kB' 'Buffers: 2704 kB' 'Cached: 15222636 kB' 'SwapCached: 0 kB' 'Active: 12387884 kB' 'Inactive: 3465204 kB' 'Active(anon): 11876356 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630536 kB' 'Mapped: 202416 kB' 'Shmem: 11248608 kB' 'KReclaimable: 206348 kB' 'Slab: 626268 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419920 kB' 'KernelStack: 16704 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13238184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203684 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
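The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry above is the verifier checking which transparent-hugepage mode is active: the kernel brackets the selected word in that sysfs file, so AnonHugePages is only worth reading when THP is not pinned to "never". A sketch of that guard, written as a hypothetical standalone form of the traced check:

    # The active THP mode is the bracketed word, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of anon memory backed by THP
    else
        anon=0                              # THP disabled; nothing to discount
    fi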
[setup/common.sh@31-32: get_meminfo scans the /proc/meminfo keys (MemTotal … HardwareCorrupted) against AnonHugePages; none match, each iteration continues]
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
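With anon settled at 0, verify_nr_hugepages queries the system-wide surplus next. A hedged sketch of what this accounting sets up; the exact arithmetic lives in setup/hugepages.sh, and the variable roles here are inferred from the locals declared above:

    # Surplus pages are allocations beyond the configured pool size; a
    # non-zero surp would inflate the observed totals, so the verifier
    # reads it before comparing counts against the requested 1025 pages.
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    echo "surp=$surp anon=$anon"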
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.463 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40169080 kB' 'MemAvailable: 43752044 kB' 'Buffers: 2704 kB' 'Cached: 15222652 kB' 'SwapCached: 0 kB' 'Active: 12387196 kB' 'Inactive: 3465204 kB' 'Active(anon): 11875668 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 629808 kB' 'Mapped: 202368 kB' 'Shmem: 11248624 kB' 'KReclaimable: 206348 kB' 'Slab: 626260 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419912 kB' 'KernelStack: 16656 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13238200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
[setup/common.sh@31-32: get_meminfo scans the /proc/meminfo keys against HugePages_Surp; the key-by-key trace continues]
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40169792 kB' 'MemAvailable: 43752756 kB' 'Buffers: 2704 kB' 'Cached: 15222656 kB' 'SwapCached: 0 kB' 'Active: 12387604 kB' 'Inactive: 3465204 kB' 'Active(anon): 11876076 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630220 kB' 'Mapped: 202368 kB' 'Shmem: 11248628 kB' 'KReclaimable: 206348 kB' 'Slab: 626260 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419912 kB' 'KernelStack: 16672 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13238224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:15.465 13:48:52 
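[Editor's note] The scan ending just above is a completed get_meminfo HugePages_Surp lookup: the helper snapshots the whole meminfo table, splits each entry on ': ', and hits `continue` for every key until the requested one matches, then echoes the value (0 here, which hugepages.sh stores as surp=0). A minimal standalone sketch of that idiom, assuming only bash and /proc/meminfo; meminfo_get and its exact layout are hypothetical, not the actual setup/common.sh source:

    #!/usr/bin/env bash
    # Sketch of the lookup idiom traced above: scan meminfo entries and
    # "continue" past every key until the requested one matches.
    # meminfo_get is a hypothetical name, not the setup/common.sh helper.
    shopt -s extglob

    meminfo_get() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local mem line var val _
        # Per-node lookups use that node's meminfo when it exists; with an
        # empty $node the [[ -e .../node$node/meminfo ]] probe fails and the
        # helper falls back to /proc/meminfo, as the trace above shows.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of continues in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    meminfo_get HugePages_Surp      # prints 0 on the box above
    meminfo_get HugePages_Total 0   # per-node form, used later in this log

One helper serves both the whole-system pass here and the per-node passes later in the log, which is why the probe of /sys/devices/system/node/node/meminfo (empty node suffix) above harmlessly falls back to /proc/meminfo.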
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 
13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.465 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.466 13:48:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:15.466 nr_hugepages=1025 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.466 resv_hugepages=0 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.466 surplus_hugepages=0 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.466 anon_hugepages=0 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.466 
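[Editor's note] With the HugePages_Rsvd lookup above also returning 0, the trace echoes the figures it will reconcile (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0) and asserts at hugepages.sh@107-110 that the kernel-reported total matches the request plus surplus and reserved pages: 1025 == 1025 + 0 + 0. A sketch of that bookkeeping, reusing the hypothetical meminfo_get from the previous sketch (variable names mirror the trace; the guards are plain shell arithmetic):

    # Bookkeeping sketched from hugepages.sh@99-110 above.
    nr_hugepages=1025                      # the odd allocation under test
    surp=$(meminfo_get HugePages_Surp)     # 0 in this run
    resv=$(meminfo_get HugePages_Rsvd)     # 0 in this run
    total=$(meminfo_get HugePages_Total)   # 1025 in this run

    # The kernel total must equal request + surplus + reserved:
    # 1025 == 1025 + 0 + 0 holds, so the test proceeds.
    (( total == nr_hugepages + surp + resv )) || exit 1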
13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40170252 kB' 'MemAvailable: 43753216 kB' 'Buffers: 2704 kB' 'Cached: 15222672 kB' 'SwapCached: 0 kB' 'Active: 12387556 kB' 'Inactive: 3465204 kB' 'Active(anon): 11876028 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630132 kB' 'Mapped: 202368 kB' 'Shmem: 11248644 kB' 'KReclaimable: 206348 kB' 'Slab: 626260 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419912 kB' 'KernelStack: 16656 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486732 kB' 'Committed_AS: 13238244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203668 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.466 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.467 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23505272 kB' 'MemUsed: 9081640 kB' 'SwapCached: 0 kB' 'Active: 6438128 kB' 'Inactive: 207432 kB' 'Active(anon): 6250324 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203064 kB' 'Mapped: 69300 kB' 'AnonPages: 445596 kB' 'Shmem: 5807828 kB' 'KernelStack: 8616 kB' 'PageTables: 4476 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337772 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 
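[Editor's note] Once the totals reconcile, get_nodes walks /sys/devices/system/node and records how many hugepages each NUMA node holds; with no_nodes=2 an odd total cannot split evenly, which is the point of the odd_alloc case, and the trace records 512 on node0 and 513 on node1. The per-node dump just above (node0: MemTotal: 32586912 kB, HugePages_Total: 512) is then re-scanned for HugePages_Surp exactly as before. A sketch of the node walk, again reusing the hypothetical meminfo_get; the glob and array names mirror the trace, not the actual hugepages.sh source:

    # Enumerate NUMA nodes and record their hugepage counts, as the
    # get_nodes loop traced above does (512 + 513 = 1025).
    shopt -s extglob nullglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(meminfo_get HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this box
    (( no_nodes > 0 )) || exit 1
    echo "nodes: ${!nodes_sys[*]} -> hugepages: ${nodes_sys[*]}"   # 0 1 -> 512 513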
13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 16665616 kB' 'MemUsed: 11042932 kB' 'SwapCached: 0 kB' 'Active: 5949480 kB' 'Inactive: 3257772 kB' 'Active(anon): 5625756 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3257772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9022356 kB' 'Mapped: 133068 kB' 'AnonPages: 184608 kB' 'Shmem: 5440860 kB' 'KernelStack: 8056 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101012 kB' 'Slab: 288488 kB' 'SReclaimable: 101012 kB' 'SUnreclaim: 187476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.468 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 
13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.469 13:48:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
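For readers skimming the trace: the loop just completed is SPDK's setup/common.sh get_meminfo helper pulling one key out of a meminfo dump, once per NUMA node. Below is a minimal stand-alone sketch of the same technique, reconstructed only from what the xtrace shows; the helper's real body may differ in detail, but the file selection, the "Node N " prefix stripping, and the IFS=': ' field scan are exactly what the trace exercises.

#!/usr/bin/env bash
shopt -s extglob # the "+([0-9])" pattern below is an extended glob

# Sketch of the traced get_meminfo behavior: read /proc/meminfo, or
# /sys/devices/system/node/node<N>/meminfo when a node is given (an empty
# node makes the -e test fail, falling back to /proc/meminfo, just like
# the "local node=" case in the trace), drop the "Node N " prefix carried
# by the per-node files, then scan "key: value" pairs for the wanted key.
get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 1 # would print 0 for the node1 dump above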
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:15.469 node0=512 expecting 513
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:15.469 node1=513 expecting 512
13:48:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:15.469
00:03:15.469 real 0m3.857s
00:03:15.469 user 0m1.411s
00:03:15.470 sys 0m2.511s
13:48:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
13:48:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:15.470 ************************************
00:03:15.470 END TEST odd_alloc
00:03:15.470 ************************************
00:03:15.470 13:48:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
13:48:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
13:48:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
13:48:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
13:48:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:15.470 ************************************
00:03:15.470 START TEST custom_alloc
00:03:15.470 ************************************
13:48:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
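Before the trace hands off to setup.sh, here is the arithmetic it just walked through, restated as a runnable sketch. This is a hedged reconstruction, not the verbatim SPDK helpers: it assumes default_hugepages=2048 (kB), which matches the 'Hugepagesize: 2048 kB' lines in the dumps, it models HUGENODE as a plain string rather than the traced IFS=','-joined array, and it reproduces the traced results (1048576 kB becomes 512 pages split 256/256; once nodes_hp=(512 1024) is set it is copied through unchanged).

#!/usr/bin/env bash
# Hedged sketch of the traced size-to-pages and per-node split logic.
default_hugepages=2048 # kB; assumption matching "Hugepagesize: 2048 kB" above
nodes_hp=()
nodes_test=()
nr_hugepages=0

get_test_nr_hugepages() { # e.g. 1048576 kB -> 512 pages, 2097152 kB -> 1024
    local size=$1
    ((size >= default_hugepages)) || return 1
    nr_hugepages=$((size / default_hugepages))
}

get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=2 # the test box in this log has two NUMA nodes
    nodes_test=()
    if ((${#nodes_hp[@]} > 0)); then
        # once nodes_hp[] holds an explicit per-node plan, copy it through
        local node
        for node in "${!nodes_hp[@]}"; do
            nodes_test[node]=${nodes_hp[node]}
        done
        return 0
    fi
    # otherwise split evenly across the nodes (512 -> 256 + 256)
    local share=$((_nr_hugepages / _no_nodes))
    while ((_no_nodes > 0)); do
        nodes_test[--_no_nodes]=$share
    done
}

get_test_nr_hugepages 1048576 && get_test_nr_hugepages_per_node
echo "${nodes_test[@]}" # 256 256

nodes_hp=([0]=512 [1]=1024)
get_test_nr_hugepages 2097152 && get_test_nr_hugepages_per_node
HUGENODE=''
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=${HUGENODE:+,}"nodes_hp[$node]=${nodes_hp[node]}"
done
echo "$HUGENODE" # nodes_hp[0]=512,nodes_hp[1]=1024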
13:48:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
13:48:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
13:48:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:18.764 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:18.764 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:18.764 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:18.764 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.764 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.026 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 39122260 kB' 'MemAvailable: 42705224 kB' 'Buffers: 2704 kB' 'Cached: 15222784 kB' 'SwapCached: 0 kB' 'Active: 12388404 kB' 'Inactive: 3465204 kB' 'Active(anon): 11876876 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630868 kB' 'Mapped: 202384 kB' 'Shmem: 11248756 kB' 'KReclaimable: 206348 kB' 'Slab: 626560 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420212 kB' 'KernelStack: 16704 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963468 kB' 'Committed_AS: 13238588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203700 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:19.026 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31/@32 field scan walks every field of the /proc/meminfo dump above until it reaches AnonHugePages]
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
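A note on the anon=0 just derived: at setup/hugepages.sh@96 the trace compared the kernel's THP mode string ('always [madvise] never') against *[never]*, so AnonHugePages is only sampled when transparent hugepages are not disabled outright. A tiny hedged sketch of that guard, reusing the get_meminfo sketch from earlier; the sysfs path is the standard kernel knob, and the variable names simply mirror the trace:

# Hedged sketch of the THP guard traced at setup/hugepages.sh@96.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled) # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: anonymous hugepages could inflate the totals,
    # so sample them (0 kB in the dump above).
    anon=$(get_meminfo AnonHugePages)
fi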
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 39124748 kB' 'MemAvailable: 42707712 kB' 'Buffers: 2704 kB' 'Cached: 15222788 kB' 'SwapCached: 0 kB' 'Active: 12388128 kB' 'Inactive: 3465204 kB' 'Active(anon): 11876600 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630580 kB' 'Mapped: 202384 kB' 'Shmem: 11248760 kB' 'KReclaimable: 206348 kB' 'Slab: 626560 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420212 kB' 'KernelStack: 16688 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963468 kB' 'Committed_AS: 13238604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203652 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
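Before each lookup, common.sh re-reads the memory counters: it prefers a per-NUMA-node file when a node is given (here node is empty, so the /sys path test above fails and /proc/meminfo is used), loads the lines into an array with mapfile, and strips the "Node N " prefix that per-node files carry. A sketch of that load path, assuming bash with extglob enabled; node=0 is an assumption for illustration:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern
    node=0                                # hypothetical; empty in this log
    mem_f=/proc/meminfo
    node_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $node_f ]] && mem_f=$node_f
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines look like "Node 0 MemTotal: ..."; strip the prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"

The long printf of the whole snapshot in the trace appears to be this array being replayed into the key-scanning loop.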
00:03:19.028 13:48:56 setup.sh.hugepages.custom_alloc -- [xtrace condensed: the scan compares every snapshot key from MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and continues past each one]
00:03:19.029 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.029 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.029 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.029 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
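A note on reading these lines: everything after the Jenkins timestamp is bash xtrace output, with a PS4 that stamps each traced command with the wall-clock time, the test name, the source file, and the line number (common.sh@33, hugepages.sh@99, and so on). A sketch of a prefix in the same spirit, assuming plain bash; the exact SPDK PS4 may differ:

    #!/usr/bin/env bash
    # $(date +%T) is re-evaluated for every traced command; BASH_SOURCE and
    # LINENO expand to the file and line being executed.
    export PS4='+ $(date +%T) ${BASH_SOURCE##*/}@${LINENO} -- '
    set -x
    echo hello   # traces as, e.g.: + 13:48:56 trace-demo.sh@6 -- echo hello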
00:03:19.029 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:19.030 13:48:56 setup.sh.hugepages.custom_alloc -- [xtrace condensed: common.sh@17-29 run again (get=HugePages_Rsvd, node empty, mem_f=/proc/meminfo) and a fresh snapshot is printed; it matches the previous one except Cached: 15222804 kB, Active: 12388148 kB, Active(anon): 11876620 kB, AnonPages: 630560 kB, Shmem: 11248776 kB, KernelStack: 16672 kB, PageTables: 8264 kB and Committed_AS: 13238628 kB; the scan then skips every key from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:19.031 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.032 13:48:56 setup.sh.hugepages.custom_alloc -- [xtrace condensed: common.sh@17-29 run once more (get=HugePages_Total) and a third snapshot is printed; it differs from the second only in MemFree: 39124500 kB, MemAvailable: 42707464 kB, Cached: 15222824 kB, Active: 12387748 kB, Active(anon): 11876220 kB, AnonPages: 630140 kB, Shmem: 11248796 kB, VmallocUsed: 203668 kB and Committed_AS: 13238648 kB]
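hugepages.sh@107 is the accounting invariant this test case is built around: the number of pages the test requested must equal the kernel's page count plus surplus plus reserved pages, and @109 additionally requires surplus and reserved to be zero here. A sketch of the same check, assuming plain bash; the variable names are inferred from the trace, not SPDK's verbatim:

    #!/usr/bin/env bash
    # Read the hugepage counters straight from /proc/meminfo (awk stands in
    # for the get_meminfo loop traced above).
    requested=1536
    nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( requested == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2

With surp=0 and resv=0, as in this run, the check reduces to requested == nr, which is why the trace immediately re-queries HugePages_Total.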
00:03:19.032 13:48:56 setup.sh.hugepages.custom_alloc -- [xtrace condensed: the final scan compares every snapshot key from MemTotal through ShmemHugePages against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l in the same compare-and-continue pattern, and the trace continues]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.033 13:48:56 
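The block above is bash xtrace from the get_meminfo helper in setup/common.sh: it loads a meminfo file into an array and walks it field by field, which is why every non-matching key produces an IFS/read/compare/continue run in the log until HugePages_Total matches and 1536 is echoed. A minimal, self-contained sketch of that pattern (simplified names; not the verbatim setup/common.sh source):

#!/usr/bin/env bash
# Sketch of the lookup traced above: print one field from meminfo,
# optionally for a single NUMA node.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2-} var val _ mem_f mem line
    mem_f=/proc/meminfo
    # A per-node query switches to the node's own meminfo file, exactly as
    # the trace does for /sys/devices/system/node/node0/meminfo later on.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix
    # (this is the extglob expansion visible at setup/common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan each "Key: value" line; non-matching keys fall through with
    # continue, which is what the repeated xtrace triples show.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total    # prints 1536 on this box
get_meminfo_sketch HugePages_Surp 0   # per-node query; prints 0 here

Comparing against the quoted "$get" makes the match literal; the escaped \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l in the trace is just how bash xtrace renders a literal [[ == ]] operand.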
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.033 13:48:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 23499488 kB' 'MemUsed: 9087424 kB' 'SwapCached: 0 kB' 'Active: 6437576 kB' 'Inactive: 207432 kB' 'Active(anon): 6249772 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203068 kB' 'Mapped: 69316 kB' 'AnonPages: 445012 kB' 'Shmem: 5807832 kB' 'KernelStack: 8584 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337772 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
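get_nodes, traced at setup/hugepages.sh@27-32 above, discovers the NUMA topology with an extglob pattern and keys an array by node id. A sketch of the same enumeration (nodes_sys_sketch is a hypothetical name; the real script fills the counts via get_meminfo, whereas this sketch reads the per-node sysfs nr_hugepages counter, which reports the same per-node total):

shopt -s extglob nullglob
declare -a nodes_sys_sketch

# Each /sys/devices/system/node/node<N> directory is one NUMA node;
# ${node##*node} strips everything up to the last "node", leaving the id.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys_sketch[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#nodes_sys_sketch[@]}"            # 2 on this machine
echo "per-node hugepages: ${nodes_sys_sketch[*]}"  # 512 1024 in this run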
[xtrace condensed: get_meminfo scans the node0 snapshot key by key, from MemTotal through HugePages_Free, comparing each against HugePages_Surp and skipping it with continue]
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.035 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708548 kB' 'MemFree: 15624760 kB' 'MemUsed: 12083788 kB' 'SwapCached: 0 kB' 'Active: 5950660 kB' 'Inactive: 3257772 kB' 'Active(anon): 5626936 kB' 'Inactive(anon): 0 kB' 'Active(file): 323724 kB' 'Inactive(file): 3257772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9022504 kB' 'Mapped: 133068 kB' 'AnonPages: 185576 kB' 'Shmem: 5441008 kB' 'KernelStack: 8104 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101012 kB' 'Slab: 288788 kB' 'SReclaimable: 101012 kB' 'SUnreclaim: 187776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
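The hugepages.sh@115-117 entries around this point are the reservation/surplus adjustment: for each node the test uses, reserved pages plus the node's HugePages_Surp (queried above for node0 and just now for node1, both 0 in this run) are folded into the expected per-node count. Schematically (values from this run; assumes the get_meminfo_sketch helper defined in the earlier sketch):

declare -a nodes_test=([0]=512 [1]=1024)   # what custom_alloc asked for
resv=0                                     # no reserved pages in this run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                        # hugepages.sh@116
    surp=$(get_meminfo_sketch HugePages_Surp "$node")     # hugepages.sh@117
    (( nodes_test[node] += surp ))                        # += 0 on both nodes here
done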
[xtrace condensed: the node1 snapshot is scanned the same way, every key from MemTotal through HugePages_Free compared against HugePages_Surp and skipped with continue]
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.036 node0=512 expecting 512
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:19.036 node1=1024 expecting 1024
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:19.036
00:03:19.036 real 0m3.862s
00:03:19.036 user 0m1.487s
00:03:19.036 sys 0m2.437s
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.036 13:48:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.036 ************************************
00:03:19.036 END TEST custom_alloc
00:03:19.036 ************************************
00:03:19.036 13:48:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:19.036 13:48:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:19.036 13:48:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.036 13:48:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.036 13:48:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:19.295 ************************************
00:03:19.295 START TEST no_shrink_alloc
00:03:19.295 ************************************
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
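The hugepages.sh@126-130 entries that close custom_alloc above rely on a compact bash idiom: writing each observed count as an array index (sorted_t[count]=1) de-duplicates and sorts in one step, because indexed-array keys enumerate in ascending order, so both sides reduce to a joined string such as 512,1024 for the final comparison. A standalone sketch of that verification, with values taken from this run:

declare -a sorted_t sorted_s
declare -a nodes_test=([0]=512 [1]=1024)   # measured per-node totals
declare -a nodes_sys=([0]=512 [1]=1024)    # kernel-reported totals

for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1           # hugepages.sh@127: the count becomes a key
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done

# "${!arr[*]}" lists the keys (the distinct counts) in ascending order,
# joined by the first character of IFS.
expected=$(IFS=,; echo "${!sorted_s[*]}")  # 512,1024
actual=$(IFS=,; echo "${!sorted_t[*]}")    # 512,1024
[[ $actual == "$expected" ]]               # hugepages.sh@130 in the trace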
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.295 13:48:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:22.599 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:22.599 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:22.599 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:22.599 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.599 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
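The get_test_nr_hugepages trace that runs from @49 in the previous block to the @73 return above shows the sizing rule: the requested size divided by the default hugepage size gives nr_hugepages (2097152 / 2048 = 1024, which implies both figures are in kB), and because an explicit node list ('0') was passed, get_test_nr_hugepages_per_node pins the whole count on each listed node rather than spreading it. A sketch under those assumptions, reproducing only the branch the trace exercises:

get_test_nr_hugepages_sketch() {
    local size=$1; shift            # requested pool size in kB, e.g. 2097152
    local user_nodes=("$@")         # optional explicit node ids, e.g. (0)
    local default_hugepages=2048    # kB; Hugepagesize from /proc/meminfo
    local nr_hugepages=$(( size / default_hugepages ))   # 1024 here
    local node

    declare -ga nodes_test=()
    # Traced branch (hugepages.sh@69-71): each user-named node gets the full count.
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages, node0 share=${nodes_test[0]}"
}

get_test_nr_hugepages_sketch 2097152 0   # -> nr_hugepages=1024, node0 share=1024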
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.863 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40205472 kB' 'MemAvailable: 43788436 kB' 'Buffers: 2704 kB' 'Cached: 15222936 kB' 'SwapCached: 0 kB' 'Active: 12390488 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878960 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632788 kB' 'Mapped: 202948 kB' 'Shmem: 11248908 kB' 'KReclaimable: 206348 kB' 'Slab: 626732 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420384 kB' 'KernelStack: 16864 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13241992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203892 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
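Two sanity checks fall out of the snapshot just printed. First, the hugetlb pool is internally consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) equals the reported Hugetlb figure of 2097152 kB. Second, the @96 guard in the previous block is the transparent-hugepage check; the escaped pattern is how xtrace renders [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]] after expansion, and 'always [madvise] never' passes because THP is not pinned to never. As small runnable checks:

# Pool consistency: values copied from the snapshot above.
total=1024 pagesize_kb=2048 hugetlb_kb=2097152
(( total * pagesize_kb == hugetlb_kb )) && echo "hugetlb pool consistent"

# THP guard as traced at setup/hugepages.sh@96: fails only when the
# kernel mode is pinned to [never].
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
[[ $thp != *"[never]"* ]] && echo "THP enabled mode: $thp"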
[xtrace condensed: the AnonHugePages lookup scans this snapshot the same way, comparing each key from MemTotal through KernelStack against AnonHugePages and skipping non-matches with continue; the scan is still in progress at this point in the log]
00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.864 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40206252 kB' 'MemAvailable: 43789216 kB' 'Buffers: 2704 kB' 'Cached: 15222940 kB' 'SwapCached: 0 kB' 'Active: 12389996 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878468 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
632900 kB' 'Mapped: 202496 kB' 'Shmem: 11248912 kB' 'KReclaimable: 206348 kB' 'Slab: 626752 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420404 kB' 'KernelStack: 16944 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13242520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203780 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 
13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.865 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- 
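The trace above pins down the whole shape of the get_meminfo helper: read the chosen meminfo file into an array, strip any "Node <n> " prefix, then walk the "Key: value" records until the requested field matches and echo its value. A minimal sketch of that pattern, reconstructed from the setup/common.sh@16-33 references in this log rather than from the SPDK source itself:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch only: reconstructed from the trace, not the verbatim helper.
    get_meminfo() {
        local get=$1 node=${2:-}   # field name, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-node meminfo when a node is given and the file exists
        # (the trace probes /sys/devices/system/node/node$node/meminfo).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each record with "Node <n> "; strip it so the
        # same scan handles both file layouts.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value [kB]" records until the requested field matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp on this box it would print 0, which is exactly the surp=0 the trace records next.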
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.866 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40206248 kB' 'MemAvailable: 43789212 kB' 'Buffers: 2704 kB' 'Cached: 15222960 kB' 'SwapCached: 0 kB' 'Active: 12389048 kB' 'Inactive: 3465204 kB' 'Active(anon): 11877520 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631916 kB' 'Mapped: 202504 kB' 'Shmem: 11248932 kB' 'KReclaimable: 206348 kB' 'Slab: 626752 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420404 kB' 'KernelStack: 16736 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13239556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203684 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:22.867 13:49:00 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: setup/common.sh@31-32 continues past every /proc/meminfo field that is not HugePages_Rsvd]
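For a one-off lookup the same answer could be pulled with a single filter; this is only an illustrative alternative, not what setup/common.sh runs:

    # Illustrative equivalent of get_meminfo HugePages_Rsvd (note the trailing
    # colon: /proc/meminfo prints keys as "HugePages_Rsvd:").
    awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo

The pure-bash loop traced here avoids spawning an external process per lookup and reuses one code path for the per-node files once the "Node <n> " prefix is stripped.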
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.868 nr_hugepages=1024 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.868 resv_hugepages=0 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.868 surplus_hugepages=0 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.868 anon_hugepages=0 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.868 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40206444 kB' 'MemAvailable: 43789408 kB' 'Buffers: 2704 kB' 'Cached: 15223000 kB' 'SwapCached: 0 kB' 'Active: 12388680 kB' 'Inactive: 3465204 kB' 'Active(anon): 11877152 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631496 kB' 
'Mapped: 202380 kB' 'Shmem: 11248972 kB' 'KReclaimable: 206348 kB' 'Slab: 626864 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 420516 kB' 'KernelStack: 16736 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13239576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203684 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:22.868 [xtrace condensed: the same common.sh@31-32 compare/continue scan runs over every /proc/meminfo key from MemTotal onward, this time looking for HugePages_Total] 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.132 13:49:00
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 22471356 kB' 'MemUsed: 10115556 kB' 'SwapCached: 0 kB' 'Active: 6438648 kB' 'Inactive: 207432 kB' 'Active(anon): 6250844 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203072 kB' 'Mapped: 69328 kB' 'AnonPages: 446228 kB' 'Shmem: 5807836 kB' 'KernelStack: 8712 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337976 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.132 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.132 
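The hugepages.sh steps traced here amount to a small accounting check: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages, and each NUMA node's share is then verified separately against /sys/devices/system/node/nodeN/meminfo. A minimal sketch of that check, built on the get_meminfo sketch above (the structure is reconstructed from the trace, not the verbatim script):

```bash
# Sketch of the verification around hugepages.sh@107-117 (reconstructed, not verbatim).
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo HugePages_Total)  # 1024 in this run

(( total == nr_hugepages + surp + resv )) \
    || { echo "hugepage accounting mismatch" >&2; exit 1; }

# Per-node view: this machine has two nodes (no_nodes=2 in the trace),
# with all 1024 pages expected on node0 and none on node1.
declare -A nodes_test=([0]=1024 [1]=0)
for node in "${!nodes_test[@]}"; do
    node_surp=$(get_meminfo HugePages_Surp "$node")
    (( nodes_test[node] += node_surp ))
    echo "node$node=${nodes_test[node]}"   # trace prints "node0=1024 expecting 1024"
done
```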
13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the common.sh@31-32 compare/continue scan repeats over the node0 meminfo keys (MemTotal through Unaccepted), this time looking for HugePages_Surp] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:23.133 node0=1024 expecting 1024 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.133 13:49:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:26.424 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:26.424 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:26.424 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:b0:00.0 (8086 2701): Already 
using the vfio-pci driver 00:03:26.424 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.424 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.688 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:26.688 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:26.688 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.688 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.688 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40214332 kB' 'MemAvailable: 43797296 kB' 'Buffers: 2704 kB' 'Cached: 15223068 kB' 'SwapCached: 0 kB' 'Active: 12390252 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878724 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632384 kB' 'Mapped: 202420 kB' 'Shmem: 11249040 kB' 'KReclaimable: 206348 kB' 'Slab: 626124 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419776 kB' 'KernelStack: 16688 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 
kB' 'Committed_AS: 13240108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203748 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.689 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace elided: the same @31 read / @32 key-compare / @32 continue cycle repeats for every remaining /proc/meminfo key -- Active(anon) through HardwareCorrupted -- none of which match \A\n\o\n\H\u\g\e\P\a\g\e\s]
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
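The @97 assignment above is the tail of a get_meminfo call: the helper reads the whole meminfo key list and compares each key against the requested one, which is what produces the long compare/continue runs in this trace. A minimal sketch of that helper, reconstructed purely from the setup/common.sh@16-@33 trace markers (the real function lives in SPDK's test/setup/common.sh and may differ in detail):

    #!/usr/bin/env bash
    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-NUMA-node meminfo file when NODE is given. Reconstructed from
    # the trace, not copied from the SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Use the per-node file only when a node was requested and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip any "Node <n> " prefix

        # This loop is the source of the [[ key == pattern ]] / continue runs:
        # every key is tested in order until the requested one is reached.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # -> 0 in the run above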
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40215088 kB' 'MemAvailable: 43798052 kB' 'Buffers: 2704 kB' 'Cached: 15223072 kB' 'SwapCached: 0 kB' 'Active: 12389508 kB' 'Inactive: 3465204 kB' 'Active(anon): 11877980 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632152 kB' 'Mapped: 202328 kB' 'Shmem: 11249044 kB' 'KReclaimable: 206348 kB' 'Slab: 626132 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419784 kB' 'KernelStack: 16688 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13240124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.690 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: each key from MemFree through HugePages_Rsvd is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue]
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
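The mem=("${mem[@]#Node +([0-9]) }") step seen at @29 in each lookup is an extglob prefix strip: per-node meminfo files under /sys prefix every line with "Node <n> ", and removing that prefix gives the lines the same "Key: value" shape as the global /proc/meminfo, so one parsing loop serves both. A short illustration (the sample line and its value are hypothetical):

    shopt -s extglob                      # +([0-9]) needs extended globbing
    line='Node 0 HugePages_Total: 512'    # made-up per-node meminfo line
    echo "${line#Node +([0-9]) }"         # -> HugePages_Total: 512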
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40215316 kB' 'MemAvailable: 43798280 kB' 'Buffers: 2704 kB' 'Cached: 15223092 kB' 'SwapCached: 0 kB' 'Active: 12389540 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878012 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632148 kB' 'Mapped: 202328 kB' 'Shmem: 11249064 kB' 'KReclaimable: 206348 kB' 'Slab: 626132 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419784 kB' 'KernelStack: 16688 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13240148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203716 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.692 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: each key from MemFree through HugePages_Free is compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue]
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:26.694 nr_hugepages=1024
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.694 resv_hugepages=0
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.694 surplus_hugepages=0
00:03:26.694 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.695 anon_hugepages=0
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
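Taken together, the hugepages.sh@97-@109 steps are a pool-accounting check: after the allocation test, the hugepage pool must still hold exactly the 1024 pages that were requested, with no surplus pages, no outstanding reservations, and no transparent-hugepage usage. A hedged sketch of that check (names follow the trace; the real logic lives in SPDK's test/setup/hugepages.sh, and reading /proc/sys/vm/nr_hugepages is an assumption about where nr_hugepages comes from):

    # Sketch only -- mirrors the hugepages.sh@97-@109 trace entries above.
    no_shrink_check() {
        local requested=1024                  # pages requested by this run
        local anon surp resv nr_hugepages

        anon=$(get_meminfo AnonHugePages)     # THP usage, expected 0
        surp=$(get_meminfo HugePages_Surp)    # surplus pages, expected 0
        resv=$(get_meminfo HugePages_Rsvd)    # reserved pages, expected 0
        nr_hugepages=$(< /proc/sys/vm/nr_hugepages)

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # Every requested page must be accounted for by the pool...
        (( requested == nr_hugepages + surp + resv )) || return 1
        # ...and with surp == resv == 0 the pool must equal the request.
        (( requested == nr_hugepages ))
    }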
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295460 kB' 'MemFree: 40214832 kB' 'MemAvailable: 43797796 kB' 'Buffers: 2704 kB' 'Cached: 15223112 kB' 'SwapCached: 0 kB' 'Active: 12389648 kB' 'Inactive: 3465204 kB' 'Active(anon): 11878120 kB' 'Inactive(anon): 0 kB' 'Active(file): 511528 kB' 'Inactive(file): 3465204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632320 kB' 'Mapped: 202328 kB' 'Shmem: 11249084 kB' 'KReclaimable: 206348 kB' 'Slab: 626132 kB' 'SReclaimable: 206348 kB' 'SUnreclaim: 419784 kB' 'KernelStack: 16672 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487756 kB' 'Committed_AS: 13242768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 203684 kB' 'VmallocChunk: 0 kB' 'Percpu: 57280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1965376 kB' 'DirectMap2M: 22876160 kB' 'DirectMap1G: 44040192 kB'
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: each key from MemFree through Dirty is compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue]
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.695 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- 
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.696 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.697 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.697 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32586912 kB' 'MemFree: 22459352 kB' 'MemUsed: 10127560 kB' 'SwapCached: 0 kB' 'Active: 6438164 kB' 'Inactive: 207432 kB' 'Active(anon): 6250360 kB' 'Inactive(anon): 0 kB' 'Active(file): 187804 kB' 'Inactive(file): 207432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203072 kB' 'Mapped: 69336 kB' 'AnonPages: 445680 kB' 'Shmem: 5807836 kB' 'KernelStack: 8568 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105336 kB' 'Slab: 337676 kB' 'SReclaimable: 105336 kB' 'SUnreclaim: 232340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:26.697 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.697 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
... (the read/compare/continue cycle repeats for each remaining node0 meminfo field until HugePages_Surp matches) ...
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
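Both lookups condensed above (HugePages_Total globally, then HugePages_Surp for node 0) walk a meminfo file key by key, which is why the trace shows one read/compare/continue cycle per field. A minimal sketch of that lookup, assuming only /proc/meminfo and the per-node sysfs meminfo layout; the function body here is illustrative, not the verbatim setup/common.sh helper:

  shopt -s extglob
  get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
      line=${line#Node +([0-9]) }             # per-node rows carry a "Node <N> " prefix
      IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key/value
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done < "$mem_f"
    return 1
  }
  # get_meminfo HugePages_Total   -> 1024
  # get_meminfo HugePages_Surp 0  -> 0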
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:26.698 node0=1024 expecting 1024
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:26.698 
00:03:26.698 real 0m7.589s
00:03:26.698 user 0m2.757s
00:03:26.698 sys 0m4.958s
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.698 13:49:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.698 ************************************
00:03:26.698 END TEST no_shrink_alloc
00:03:26.698 ************************************
00:03:26.958 13:49:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:26.958 13:49:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:26.958 
00:03:26.958 real 0m27.775s
00:03:26.958 user 0m10.327s
00:03:26.958 sys 0m17.939s
00:03:26.958 13:49:04 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.958 13:49:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.958 ************************************
00:03:26.958 END TEST hugepages
00:03:26.958 ************************************
00:03:26.958 13:49:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0
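The clear_hp trace above ends the hugepages suite by releasing every reserved page. A sketch of that teardown, assuming the standard sysfs hugepage layout (the bare "echo 0" events in the trace are redirected into each nr_hugepages file, which xtrace does not show):

  for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"   # drop all pages of this size on this node
    done
  done
  export CLEAR_HUGE=yes             # hint to later setup.sh runs that pages were cleared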
00:03:26.958 13:49:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:26.958 13:49:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.958 13:49:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.958 13:49:04 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:26.958 ************************************
00:03:26.958 START TEST driver
00:03:26.958 ************************************
00:03:26.958 13:49:04 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:26.958 13:49:04 setup.sh.driver -- * Looking for test storage...
00:03:26.958 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:26.958 13:49:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:26.958 13:49:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:26.958 13:49:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:32.232 13:49:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:32.232 13:49:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:32.232 13:49:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:32.232 13:49:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:32.491 ************************************
00:03:32.491 START TEST guess_driver
00:03:32.491 ************************************
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 167 > 0 ))
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
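pick_driver settles on vfio-pci only when the kernel exposes IOMMU groups (167 of them here) and modprobe can resolve the vfio_pci dependency chain shown above. A condensed sketch of that decision; this is a simplification, since the traced helper also inspects the unsafe no-IOMMU toggle before committing:

  pick_vfio() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci is usable when IOMMU groups exist and the module resolves
    if ((${#groups[@]} > 0)) && modprobe --show-depends vfio_pci &> /dev/null; then
      echo vfio-pci
    else
      echo 'No valid driver found'
    fi
  }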
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:32.491 Looking for driver=vfio-pci
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.491 13:49:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:36.686 13:49:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:36.686 13:49:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:36.686 13:49:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
... (the @58/@61/@57 check repeats for every remaining device row printed by setup.sh config, all bound to vfio-pci) ...
00:03:36.687 13:49:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:36.687 13:49:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:36.687 13:49:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:36.687 13:49:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
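The validation loop condensed above parses the output of setup.sh config, whose device rows end in "-> <driver>", and fails the test if any row names a different driver. A sketch of that check; the column layout is inferred from the read pattern in the trace, and $rootdir is an assumed variable pointing at the spdk checkout:

  driver=vfio-pci fail=0
  while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue            # skip rows that are not device lines
    [[ $setup_driver == "$driver" ]] || fail=1   # any other binding fails the test
  done < <("$rootdir/scripts/setup.sh" config)
  ((fail == 0))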
00:03:41.963 
00:03:41.963 real 0m9.298s
00:03:41.963 user 0m2.888s
00:03:41.963 sys 0m5.626s
00:03:41.963 13:49:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:41.963 13:49:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:41.963 ************************************
00:03:41.963 END TEST guess_driver
00:03:41.963 ************************************
00:03:41.963 13:49:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:03:41.963 
00:03:41.963 real 0m14.797s
00:03:41.963 user 0m4.399s
00:03:41.963 sys 0m8.723s
00:03:41.963 13:49:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:41.963 13:49:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:41.963 ************************************
00:03:41.963 END TEST driver
00:03:41.963 ************************************
00:03:41.964 13:49:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:41.964 13:49:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:41.964 13:49:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:41.964 13:49:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:41.964 13:49:19 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:41.964 ************************************
00:03:41.964 START TEST devices
00:03:41.964 ************************************
00:03:41.964 13:49:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:41.964 13:49:19 setup.sh.devices -- * Looking for test storage...
00:03:41.964 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:41.964 13:49:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:41.964 13:49:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:41.964 13:49:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:41.964 13:49:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:03:46.157 13:49:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
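get_zoned_devs above filters out zoned namespaces before any disk is touched: a block device reports its zone model in /sys/block/<dev>/queue/zoned, and "none" (as all three devices report here) means an ordinary device. A sketch of that probe under the same sysfs assumption:

  zoned_devs=()
  is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }
  for nvme in /sys/block/nvme*n1; do
    is_block_zoned "${nvme##*/}" && zoned_devs+=("${nvme##*/}")  # collect devices to exclude
  done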
scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:46.157 No valid GPT data, bailing 00:03:46.157 13:49:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:46.157 13:49:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:46.157 13:49:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:46.157 13:49:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:46.157 13:49:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:46.157 13:49:24 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:af:00.0 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:03:46.157 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:46.157 13:49:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:46.158 13:49:24 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:46.158 No valid GPT data, bailing 00:03:46.158 13:49:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:46.158 13:49:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:46.158 13:49:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:46.158 13:49:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:46.158 13:49:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:46.158 13:49:24 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:af:00.0 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:b0:00.0 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:03:46.158 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:46.158 13:49:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:03:46.158 13:49:24 setup.sh.devices -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:03:46.417 No valid GPT data, bailing 00:03:46.417 13:49:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:46.417 13:49:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:46.417 13:49:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:46.417 13:49:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:46.417 13:49:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:03:46.417 13:49:24 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:b0:00.0 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 )) 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:46.417 13:49:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:46.417 13:49:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.417 13:49:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.417 13:49:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.417 ************************************ 00:03:46.417 START TEST nvme_mount 00:03:46.417 ************************************ 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.417 13:49:24 
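Each candidate disk above went through the same screen: the spdk-gpt.py probe finds no GPT ("No valid GPT data, bailing"), blkid reports no partition-table type, and the size read from sysfs must clear min_disk_size. A condensed sketch of that screen (disk_is_free is a made-up name; the real logic is block_in_use in scripts/common.sh plus sec_size_to_bytes in test/setup/common.sh):

    disk_is_free() {
        local dev=$1 min_disk_size=3221225472        # 3 GiB floor, per devices.sh@198
        # any PTTYPE from blkid means the disk already carries a partition table
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && return 1
        # /sys/block/<dev>/size counts 512-byte sectors
        (( $(cat "/sys/block/$dev/size") * 512 >= min_disk_size ))
    }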
setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.417 13:49:24 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:47.353 Creating new GPT entries in memory. 00:03:47.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.353 other utilities. 00:03:47.353 13:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.353 13:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.353 13:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.353 13:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.353 13:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.304 Creating new GPT entries in memory. 00:03:48.304 The operation has completed successfully. 00:03:48.304 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.304 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.304 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2810584 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- 
setup/devices.sh@56 -- # : 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.563 13:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.931 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.932 13:49:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.192 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.192 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.452 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:52.452 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:03:52.452 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.452 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.452 13:49:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.742 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 
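The long run of [[ ... ]] comparisons above and below is a single loop in devices.sh: with PCI_ALLOWED pinned to the test disk, setup.sh config prints one line per controller, and each reported BDF is matched against the allowed one to confirm that only the test disk was held back. Roughly (a sketch, not the exact harness code):

    allowed=0000:5e:00.0
    while read -r pci _ _ status; do
        # the test disk should be reported but deliberately left unbound
        [[ $pci == "$allowed" && $status == *'so not binding PCI dev'* ]] && found=1
    done < <(PCI_ALLOWED=$allowed ./scripts/setup.sh config)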
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.006 13:49:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.273 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.274 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.274 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.274 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:56.274 13:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.274 13:49:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.274 13:49:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 
]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.562 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.821 13:49:37 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.822 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.822 00:03:59.822 real 0m13.566s 00:03:59.822 user 0m4.039s 00:03:59.822 sys 0m7.529s 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.822 13:49:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:59.822 ************************************ 00:03:59.822 END TEST nvme_mount 00:03:59.822 ************************************ 00:04:00.080 13:49:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:00.080 13:49:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:00.080 13:49:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.080 13:49:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.080 13:49:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:00.080 ************************************ 00:04:00.080 START TEST dm_mount 00:04:00.080 ************************************ 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:00.080 13:49:37 setup.sh.devices.dm_mount 
-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:00.080 13:49:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:01.015 Creating new GPT entries in memory.
00:04:01.015 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:01.015 other utilities.
00:04:01.016 13:49:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:01.016 13:49:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:01.016 13:49:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:01.016 13:49:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:01.016 13:49:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:01.949 Creating new GPT entries in memory.
00:04:01.949 The operation has completed successfully.
00:04:01.949 13:49:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:01.949 13:49:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:01.949 13:49:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:01.949 13:49:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:01.949 13:49:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:03.324 The operation has completed successfully.
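Condensing the partitioning just logged: the label is wiped, then each 1 GiB partition is created under an exclusive lock on the disk so parallel jobs cannot race, and a udev-sync helper waits for the new nodes (udevadm settle stands in here for scripts/sync_dev_uevents.sh):

    sgdisk /dev/nvme0n1 --zap-all
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199      # nvme0n1p1, 1 GiB
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351   # nvme0n1p2, 1 GiB
    udevadm settle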
00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2814847 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:03.324 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- 
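The readlink/holders sequence just logged resolves the freshly created device-mapper node and proves that both partitions back it. Condensed (the real code retries the /dev/mapper lookup up to five times, per the for t in {1..5} loop above):

    dm=$(readlink -f /dev/mapper/nvme_dm_test)       # e.g. /dev/dm-0
    dm=${dm##*/}                                     # -> dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]  # both partitions must
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]  # list dm-0 as a holder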
setup/devices.sh@53 -- # local found=0 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.325 13:49:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.613 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@55 -- # [[ -n '' ]] 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.872 13:49:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.162 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.163 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.422 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.682 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.682 00:04:10.682 real 0m10.660s 00:04:10.682 user 0m2.803s 00:04:10.682 sys 0m4.980s 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.682 13:49:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.682 ************************************ 00:04:10.682 END TEST dm_mount 00:04:10.682 ************************************ 00:04:10.682 13:49:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:10.682 13:49:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.682 
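The final cleanup trap, condensed from the guarded steps that follow (a sketch; mount-point paths shortened): unmount anything still mounted, remove the dm target if its symlink survives, then wipe signatures from the partitions before the whole disk.

    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1   # triggers the re-read logged below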
13:49:48 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:10.682 13:49:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:10.682 13:49:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:10.682 13:49:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:10.682 13:49:48 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:10.682 13:49:48 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:10.941 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:10.941 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:10.941 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:10.941 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:10.941 13:49:48 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:10.941
00:04:10.941 real 0m29.205s
00:04:10.941 user 0m8.554s
00:04:10.941 sys 0m15.716s
00:04:10.941 13:49:48 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:10.941 13:49:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:10.941 ************************************
00:04:10.941 END TEST devices
00:04:10.941 ************************************
00:04:10.941 13:49:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:10.941
00:04:10.941 real 1m39.908s
00:04:10.941 user 0m32.368s
00:04:10.941 sys 0m59.395s
00:04:10.941 13:49:49 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:10.941 13:49:49 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:10.941 ************************************
00:04:10.941 END TEST setup.sh
00:04:10.941 ************************************
00:04:11.200 13:49:49 -- common/autotest_common.sh@1142 -- # return 0
00:04:11.200 13:49:49 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:04:15.393 Hugepages
00:04:15.393 node hugesize free / total
00:04:15.393 node0 1048576kB 0 / 0
00:04:15.393 node0 2048kB 2048 / 2048
00:04:15.393 node1 1048576kB 0 / 0
00:04:15.393 node1 2048kB 0 / 0
00:04:15.393
00:04:15.393 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:15.393 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:15.393 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:15.393 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:04:15.393 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:15.393 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:15.393 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1
00:04:15.393 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1
00:04:15.393 13:49:53 -- spdk/autotest.sh@130 -- # uname -s
00:04:15.393 13:49:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:15.393 13:49:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:15.393 13:49:53 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:18.681 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:af:00.0 (8086 2701): nvme -> vfio-pci
00:04:18.681 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:18.681 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:18.939 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:18.939 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:18.939 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:18.939 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:18.939 0000:b0:00.0 (8086 2701): nvme -> vfio-pci
00:04:18.939 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:18.939 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:20.842 0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:04:20.842 13:49:58 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:21.779 13:49:59 -- common/autotest_common.sh@1533 -- # bdfs=()
00:04:21.779 13:49:59 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:21.779 13:49:59 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:21.779 13:49:59 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:21.779 13:49:59 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:21.779 13:49:59 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:21.779 13:49:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:21.779 13:49:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:21.779 13:49:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:21.779 13:49:59 -- common/autotest_common.sh@1515 -- # (( 3 == 0 ))
00:04:21.779 13:49:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0
00:04:21.779 13:49:59 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:25.969 Waiting for block devices as requested
00:04:25.969 0000:5e:00.0 (144d a80a): vfio-pci -> nvme
00:04:25.969 0000:af:00.0 (8086 2701): vfio-pci -> nvme
00:04:25.969 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:25.969 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:25.969 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:25.969 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:25.969 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:25.969 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:25.969 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:26.228 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:26.228 0000:b0:00.0 (8086 2701): vfio-pci -> nvme
00:04:26.487 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:26.487 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:26.487 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:26.745 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:26.745 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:26.745 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:27.005 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:27.005 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:27.005 13:50:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
00:04:27.005 13:50:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2
00:04:27.005 13:50:05 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme
00:04:27.005 13:50:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:04:27.005 13:50:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]]
00:04:27.005 13:50:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1545 -- # grep oacs
00:04:27.005 13:50:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2
00:04:27.005 13:50:05 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f'
00:04:27.005 13:50:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8
00:04:27.005 13:50:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]]
00:04:27.005 13:50:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1554 -- # grep unvmcap
00:04:27.005 13:50:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2
00:04:27.005 13:50:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0'
00:04:27.005 13:50:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]]
00:04:27.005 13:50:05 -- common/autotest_common.sh@1557 -- # continue
00:04:27.005 13:50:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
00:04:27.005 13:50:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0
00:04:27.005 13:50:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2
00:04:27.005 13:50:05 -- common/autotest_common.sh@1502 -- # grep 0000:af:00.0/nvme/nvme
00:04:27.005 13:50:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1
00:04:27.005 13:50:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]]
00:04:27.005 13:50:05 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:27.288 13:50:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:27.288 13:50:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:27.288 13:50:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # oacs=' 0x7' 00:04:27.288 13:50:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:04:27.288 13:50:05 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:04:27.288 13:50:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:27.288 13:50:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:04:27.288 13:50:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1502 -- # grep 0000:b0:00.0/nvme/nvme 00:04:27.288 13:50:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:04:27.288 13:50:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:27.288 13:50:05 -- common/autotest_common.sh@1545 -- # oacs=' 0x7' 00:04:27.288 13:50:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:04:27.288 13:50:05 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:04:27.288 13:50:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:27.288 13:50:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.288 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:04:27.288 13:50:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:27.288 13:50:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.288 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:04:27.288 13:50:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:31.478 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:31.478 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:31.478 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:80:04.5 
(8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.478 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.479 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:31.479 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.479 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.479 13:50:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:31.479 13:50:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.479 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:04:31.479 13:50:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:31.479 13:50:09 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:31.479 13:50:09 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:31.479 13:50:09 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:31.479 13:50:09 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:31.479 13:50:09 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:31.479 13:50:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:31.479 13:50:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:31.479 13:50:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.479 13:50:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.479 13:50:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:31.479 13:50:09 -- common/autotest_common.sh@1515 -- # (( 3 == 0 )) 00:04:31.479 13:50:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:31.479 13:50:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:31.479 13:50:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:31.479 13:50:09 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:31.479 13:50:09 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:31.479 13:50:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:31.479 13:50:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:04:31.479 13:50:09 -- common/autotest_common.sh@1580 -- # device=0x2701 00:04:31.479 13:50:09 -- common/autotest_common.sh@1581 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:31.479 13:50:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:31.479 13:50:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:04:31.479 13:50:09 -- common/autotest_common.sh@1580 -- # device=0x2701 00:04:31.479 13:50:09 -- common/autotest_common.sh@1581 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:31.479 13:50:09 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:31.479 13:50:09 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:31.479 13:50:09 -- common/autotest_common.sh@1593 -- # return 0 00:04:31.479 13:50:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:31.479 13:50:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:31.479 13:50:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:31.479 13:50:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:31.479 13:50:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:31.479 13:50:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.479 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:04:31.479 13:50:09 -- 
spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:31.479 13:50:09 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:31.479 13:50:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.479 13:50:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.479 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:04:31.479 ************************************ 00:04:31.479 START TEST env 00:04:31.479 ************************************ 00:04:31.479 13:50:09 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:31.479 * Looking for test storage... 00:04:31.479 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:31.479 13:50:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:31.479 13:50:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.479 13:50:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.479 13:50:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.479 ************************************ 00:04:31.479 START TEST env_memory 00:04:31.479 ************************************ 00:04:31.479 13:50:09 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:31.479 00:04:31.479 00:04:31.479 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.479 http://cunit.sourceforge.net/ 00:04:31.479 00:04:31.479 00:04:31.479 Suite: memory 00:04:31.479 Test: alloc and free memory map ...[2024-07-15 13:50:09.545450] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.738 passed 00:04:31.738 Test: mem map translation ...[2024-07-15 13:50:09.559148] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.738 [2024-07-15 13:50:09.559167] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.738 [2024-07-15 13:50:09.559200] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.738 [2024-07-15 13:50:09.559210] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:31.738 passed 00:04:31.738 Test: mem map registration ...[2024-07-15 13:50:09.580780] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:31.738 [2024-07-15 13:50:09.580799] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:31.738 passed 00:04:31.738 Test: mem map adjacent registrations ...passed 00:04:31.738 00:04:31.738 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.738 suites 1 1 n/a 0 0 00:04:31.738 tests 4 4 4 0 0 00:04:31.738 asserts 152 152 152 0 n/a 00:04:31.738 00:04:31.738 Elapsed time = 0.088 
seconds 00:04:31.738 00:04:31.738 real 0m0.102s 00:04:31.738 user 0m0.087s 00:04:31.738 sys 0m0.014s 00:04:31.738 13:50:09 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.738 13:50:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:31.738 ************************************ 00:04:31.738 END TEST env_memory 00:04:31.738 ************************************ 00:04:31.738 13:50:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.738 13:50:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.738 13:50:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.738 13:50:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.738 13:50:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.738 ************************************ 00:04:31.738 START TEST env_vtophys 00:04:31.738 ************************************ 00:04:31.738 13:50:09 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.738 EAL: lib.eal log level changed from notice to debug 00:04:31.738 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.738 EAL: Detected lcore 1 as core 1 on socket 0 00:04:31.738 EAL: Detected lcore 2 as core 2 on socket 0 00:04:31.738 EAL: Detected lcore 3 as core 3 on socket 0 00:04:31.738 EAL: Detected lcore 4 as core 4 on socket 0 00:04:31.738 EAL: Detected lcore 5 as core 8 on socket 0 00:04:31.738 EAL: Detected lcore 6 as core 9 on socket 0 00:04:31.738 EAL: Detected lcore 7 as core 10 on socket 0 00:04:31.738 EAL: Detected lcore 8 as core 11 on socket 0 00:04:31.738 EAL: Detected lcore 9 as core 16 on socket 0 00:04:31.738 EAL: Detected lcore 10 as core 17 on socket 0 00:04:31.738 EAL: Detected lcore 11 as core 18 on socket 0 00:04:31.738 EAL: Detected lcore 12 as core 19 on socket 0 00:04:31.738 EAL: Detected lcore 13 as core 20 on socket 0 00:04:31.738 EAL: Detected lcore 14 as core 24 on socket 0 00:04:31.738 EAL: Detected lcore 15 as core 25 on socket 0 00:04:31.738 EAL: Detected lcore 16 as core 26 on socket 0 00:04:31.738 EAL: Detected lcore 17 as core 27 on socket 0 00:04:31.738 EAL: Detected lcore 18 as core 0 on socket 1 00:04:31.738 EAL: Detected lcore 19 as core 1 on socket 1 00:04:31.738 EAL: Detected lcore 20 as core 2 on socket 1 00:04:31.738 EAL: Detected lcore 21 as core 3 on socket 1 00:04:31.738 EAL: Detected lcore 22 as core 4 on socket 1 00:04:31.738 EAL: Detected lcore 23 as core 8 on socket 1 00:04:31.738 EAL: Detected lcore 24 as core 9 on socket 1 00:04:31.738 EAL: Detected lcore 25 as core 10 on socket 1 00:04:31.738 EAL: Detected lcore 26 as core 11 on socket 1 00:04:31.738 EAL: Detected lcore 27 as core 16 on socket 1 00:04:31.738 EAL: Detected lcore 28 as core 17 on socket 1 00:04:31.738 EAL: Detected lcore 29 as core 18 on socket 1 00:04:31.738 EAL: Detected lcore 30 as core 19 on socket 1 00:04:31.738 EAL: Detected lcore 31 as core 20 on socket 1 00:04:31.738 EAL: Detected lcore 32 as core 24 on socket 1 00:04:31.738 EAL: Detected lcore 33 as core 25 on socket 1 00:04:31.738 EAL: Detected lcore 34 as core 26 on socket 1 00:04:31.738 EAL: Detected lcore 35 as core 27 on socket 1 00:04:31.738 EAL: Detected lcore 36 as core 0 on socket 0 00:04:31.738 EAL: Detected lcore 37 as core 1 on socket 0 00:04:31.738 EAL: Detected lcore 38 as core 2 on socket 0 00:04:31.738 EAL: Detected lcore 39 as core 3 on socket 0 
00:04:31.738 EAL: Detected lcore 40 as core 4 on socket 0 00:04:31.738 EAL: Detected lcore 41 as core 8 on socket 0 00:04:31.738 EAL: Detected lcore 42 as core 9 on socket 0 00:04:31.738 EAL: Detected lcore 43 as core 10 on socket 0 00:04:31.738 EAL: Detected lcore 44 as core 11 on socket 0 00:04:31.738 EAL: Detected lcore 45 as core 16 on socket 0 00:04:31.738 EAL: Detected lcore 46 as core 17 on socket 0 00:04:31.738 EAL: Detected lcore 47 as core 18 on socket 0 00:04:31.738 EAL: Detected lcore 48 as core 19 on socket 0 00:04:31.738 EAL: Detected lcore 49 as core 20 on socket 0 00:04:31.739 EAL: Detected lcore 50 as core 24 on socket 0 00:04:31.739 EAL: Detected lcore 51 as core 25 on socket 0 00:04:31.739 EAL: Detected lcore 52 as core 26 on socket 0 00:04:31.739 EAL: Detected lcore 53 as core 27 on socket 0 00:04:31.739 EAL: Detected lcore 54 as core 0 on socket 1 00:04:31.739 EAL: Detected lcore 55 as core 1 on socket 1 00:04:31.739 EAL: Detected lcore 56 as core 2 on socket 1 00:04:31.739 EAL: Detected lcore 57 as core 3 on socket 1 00:04:31.739 EAL: Detected lcore 58 as core 4 on socket 1 00:04:31.739 EAL: Detected lcore 59 as core 8 on socket 1 00:04:31.739 EAL: Detected lcore 60 as core 9 on socket 1 00:04:31.739 EAL: Detected lcore 61 as core 10 on socket 1 00:04:31.739 EAL: Detected lcore 62 as core 11 on socket 1 00:04:31.739 EAL: Detected lcore 63 as core 16 on socket 1 00:04:31.739 EAL: Detected lcore 64 as core 17 on socket 1 00:04:31.739 EAL: Detected lcore 65 as core 18 on socket 1 00:04:31.739 EAL: Detected lcore 66 as core 19 on socket 1 00:04:31.739 EAL: Detected lcore 67 as core 20 on socket 1 00:04:31.739 EAL: Detected lcore 68 as core 24 on socket 1 00:04:31.739 EAL: Detected lcore 69 as core 25 on socket 1 00:04:31.739 EAL: Detected lcore 70 as core 26 on socket 1 00:04:31.739 EAL: Detected lcore 71 as core 27 on socket 1 00:04:31.739 EAL: Maximum logical cores by configuration: 128 00:04:31.739 EAL: Detected CPU lcores: 72 00:04:31.739 EAL: Detected NUMA nodes: 2 00:04:31.739 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:31.739 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:31.739 EAL: Checking presence of .so 'librte_eal.so' 00:04:31.739 EAL: Detected static linkage of DPDK 00:04:31.739 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.739 EAL: Bus pci wants IOVA as 'DC' 00:04:31.739 EAL: Buses did not request a specific IOVA mode. 00:04:31.739 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:31.739 EAL: Selected IOVA mode 'VA' 00:04:31.739 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.739 EAL: Probing VFIO support... 00:04:31.739 EAL: IOMMU type 1 (Type 1) is supported 00:04:31.739 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:31.739 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:31.739 EAL: VFIO support initialized 00:04:31.739 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.739 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.739 EAL: Setting up physically contiguous memory... 
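The virtual-area reservations that follow are easy to sanity-check: each memseg list gets a 0x61000-byte header plus a 0x400000000-byte window, which is exactly 8192 segments of 2 MiB hugepages, so 16 GiB per list, four lists per socket, two sockets:

    # Arithmetic behind the 0x400000000 memseg reservations printed below
    # (n_segs:8192, hugepage_sz:2097152).
    echo $(( 8192 * 2 ))                 # MiB per list      -> 16384 (16 GiB)
    echo $(( 4 * 8192 * 2 / 1024 ))      # GiB per socket    -> 64
    echo $(( 2 * 4 * 8192 * 2 / 1024 ))  # GiB total VA      -> 128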
00:04:31.739 EAL: Setting maximum number of open files to 524288 00:04:31.739 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.739 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:31.739 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.739 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:31.739 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.739 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:31.739 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.739 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.739 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:31.739 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:31.739 EAL: Hugepages will be freed exactly as allocated. 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: TSC frequency is ~2300000 KHz 00:04:31.739 EAL: Main lcore 0 is ready (tid=7f205d8cea00;cpuset=[0]) 00:04:31.739 EAL: Trying to obtain current memory policy. 00:04:31.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.739 EAL: Restoring previous memory policy: 0 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.739 00:04:31.739 00:04:31.739 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.739 http://cunit.sourceforge.net/ 00:04:31.739 00:04:31.739 00:04:31.739 Suite: components_suite 00:04:31.739 Test: vtophys_malloc_test ...passed 00:04:31.739 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.739 EAL: Restoring previous memory policy: 4 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.739 EAL: Trying to obtain current memory policy. 00:04:31.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.739 EAL: Restoring previous memory policy: 4 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.739 EAL: Trying to obtain current memory policy. 00:04:31.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.739 EAL: Restoring previous memory policy: 4 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.739 EAL: Trying to obtain current memory policy. 
00:04:31.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.739 EAL: Restoring previous memory policy: 4 00:04:31.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.739 EAL: request: mp_malloc_sync 00:04:31.739 EAL: No shared files mode enabled, IPC is disabled 00:04:31.739 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.998 EAL: Trying to obtain current memory policy. 00:04:31.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.998 EAL: Restoring previous memory policy: 4 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.998 EAL: Trying to obtain current memory policy. 00:04:31.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.998 EAL: Restoring previous memory policy: 4 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.998 EAL: Trying to obtain current memory policy. 00:04:31.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.998 EAL: Restoring previous memory policy: 4 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.998 EAL: Trying to obtain current memory policy. 00:04:31.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.998 EAL: Restoring previous memory policy: 4 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.998 EAL: request: mp_malloc_sync 00:04:31.998 EAL: No shared files mode enabled, IPC is disabled 00:04:31.998 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.256 EAL: request: mp_malloc_sync 00:04:32.256 EAL: No shared files mode enabled, IPC is disabled 00:04:32.256 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.256 EAL: Trying to obtain current memory policy. 
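The heap-expansion sizes in this test (4, 6, 10, 18, ... MB above, and 514 and 1026 MB in the iterations that follow) all fit the pattern 2^n + 2 MB, which is consistent with each 2^n MB allocation needing roughly one extra 2 MiB hugepage of bookkeeping overhead; a one-liner reproduces the sequence:

    # Reproduce the heap-growth sizes logged by vtophys_spdk_malloc_test.
    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB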
00:04:32.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.256 EAL: Restoring previous memory policy: 4 00:04:32.256 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.256 EAL: request: mp_malloc_sync 00:04:32.256 EAL: No shared files mode enabled, IPC is disabled 00:04:32.256 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.256 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.515 EAL: request: mp_malloc_sync 00:04:32.515 EAL: No shared files mode enabled, IPC is disabled 00:04:32.515 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.515 EAL: Trying to obtain current memory policy. 00:04:32.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.515 EAL: Restoring previous memory policy: 4 00:04:32.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.515 EAL: request: mp_malloc_sync 00:04:32.515 EAL: No shared files mode enabled, IPC is disabled 00:04:32.515 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.033 EAL: request: mp_malloc_sync 00:04:33.033 EAL: No shared files mode enabled, IPC is disabled 00:04:33.033 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.033 passed 00:04:33.033 00:04:33.033 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.033 suites 1 1 n/a 0 0 00:04:33.033 tests 2 2 2 0 0 00:04:33.033 asserts 497 497 497 0 n/a 00:04:33.033 00:04:33.033 Elapsed time = 1.114 seconds 00:04:33.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.033 EAL: request: mp_malloc_sync 00:04:33.033 EAL: No shared files mode enabled, IPC is disabled 00:04:33.033 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.033 EAL: No shared files mode enabled, IPC is disabled 00:04:33.033 EAL: No shared files mode enabled, IPC is disabled 00:04:33.033 EAL: No shared files mode enabled, IPC is disabled 00:04:33.033 00:04:33.033 real 0m1.258s 00:04:33.033 user 0m0.716s 00:04:33.033 sys 0m0.512s 00:04:33.033 13:50:10 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.033 13:50:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.033 ************************************ 00:04:33.033 END TEST env_vtophys 00:04:33.033 ************************************ 00:04:33.033 13:50:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.033 13:50:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.033 13:50:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.033 13:50:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.033 13:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.033 ************************************ 00:04:33.033 START TEST env_pci 00:04:33.033 ************************************ 00:04:33.033 13:50:11 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.033 00:04:33.033 00:04:33.033 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.033 http://cunit.sourceforge.net/ 00:04:33.033 00:04:33.033 00:04:33.033 Suite: pci 00:04:33.033 Test: pci_hook ...[2024-07-15 13:50:11.044928] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2824186 has claimed it 00:04:33.033 EAL: Cannot find device (10000:00:01.0) 00:04:33.033 EAL: Failed to attach device on primary process 00:04:33.033 passed 
00:04:33.033 00:04:33.033 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.033 suites 1 1 n/a 0 0 00:04:33.033 tests 1 1 1 0 0 00:04:33.033 asserts 25 25 25 0 n/a 00:04:33.033 00:04:33.033 Elapsed time = 0.036 seconds 00:04:33.033 00:04:33.033 real 0m0.056s 00:04:33.033 user 0m0.013s 00:04:33.033 sys 0m0.042s 00:04:33.033 13:50:11 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.033 13:50:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.033 ************************************ 00:04:33.033 END TEST env_pci 00:04:33.033 ************************************ 00:04:33.292 13:50:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.292 13:50:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.292 13:50:11 env -- env/env.sh@15 -- # uname 00:04:33.292 13:50:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.292 13:50:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.292 13:50:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.292 13:50:11 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:33.292 13:50:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.292 13:50:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.292 ************************************ 00:04:33.292 START TEST env_dpdk_post_init 00:04:33.292 ************************************ 00:04:33.292 13:50:11 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.292 EAL: Detected CPU lcores: 72 00:04:33.292 EAL: Detected NUMA nodes: 2 00:04:33.292 EAL: Detected static linkage of DPDK 00:04:33.292 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.292 EAL: Selected IOVA mode 'VA' 00:04:33.292 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.292 EAL: VFIO support initialized 00:04:33.292 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.292 EAL: Using IOMMU type 1 (Type 1) 00:04:33.551 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:04:33.810 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:04:34.069 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:04:34.069 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:04:34.069 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001004000 00:04:34.069 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:04:34.069 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001008000 00:04:34.328 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:34.328 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:04:34.328 Starting DPDK initialization... 00:04:34.328 Starting SPDK post initialization... 00:04:34.328 SPDK NVMe probe 00:04:34.328 Attaching to 0000:5e:00.0 00:04:34.328 Attaching to 0000:af:00.0 00:04:34.328 Attaching to 0000:b0:00.0 00:04:34.328 Attached to 0000:af:00.0 00:04:34.328 Attached to 0000:b0:00.0 00:04:34.328 Attached to 0000:5e:00.0 00:04:34.328 Cleaning up... 
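At this point in the run all three controllers probed above sit on vfio-pci; the binding can be confirmed straight from sysfs. The BDFs below are copied from this log, and the loop itself is a generic sketch rather than anything setup.sh runs:

    # Show which kernel driver currently claims each NVMe BDF from the probe.
    for bdf in 0000:5e:00.0 0000:af:00.0 0000:b0:00.0; do
        drv=/sys/bus/pci/devices/$bdf/driver
        if [[ -e $drv ]]; then
            echo "$bdf -> $(basename "$(readlink -f "$drv")")"
        else
            echo "$bdf -> unbound"
        fi
    done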
00:04:34.328 00:04:34.328 real 0m1.148s 00:04:34.328 user 0m0.363s 00:04:34.328 sys 0m0.108s 00:04:34.328 13:50:12 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.328 13:50:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.328 ************************************ 00:04:34.328 END TEST env_dpdk_post_init 00:04:34.328 ************************************ 00:04:34.328 13:50:12 env -- common/autotest_common.sh@1142 -- # return 0 00:04:34.328 13:50:12 env -- env/env.sh@26 -- # uname 00:04:34.328 13:50:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:34.328 13:50:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.328 13:50:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.328 13:50:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.328 13:50:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.587 ************************************ 00:04:34.587 START TEST env_mem_callbacks 00:04:34.587 ************************************ 00:04:34.587 13:50:12 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.587 EAL: Detected CPU lcores: 72 00:04:34.587 EAL: Detected NUMA nodes: 2 00:04:34.587 EAL: Detected static linkage of DPDK 00:04:34.587 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.587 EAL: Selected IOVA mode 'VA' 00:04:34.587 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.587 EAL: VFIO support initialized 00:04:34.587 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.587 00:04:34.587 00:04:34.587 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.587 http://cunit.sourceforge.net/ 00:04:34.587 00:04:34.587 00:04:34.587 Suite: memory 00:04:34.587 Test: test ... 
00:04:34.587 register 0x200000200000 2097152 00:04:34.587 malloc 3145728 00:04:34.587 register 0x200000400000 4194304 00:04:34.587 buf 0x200000500000 len 3145728 PASSED 00:04:34.587 malloc 64 00:04:34.587 buf 0x2000004fff40 len 64 PASSED 00:04:34.587 malloc 4194304 00:04:34.587 register 0x200000800000 6291456 00:04:34.587 buf 0x200000a00000 len 4194304 PASSED 00:04:34.587 free 0x200000500000 3145728 00:04:34.587 free 0x2000004fff40 64 00:04:34.587 unregister 0x200000400000 4194304 PASSED 00:04:34.587 free 0x200000a00000 4194304 00:04:34.587 unregister 0x200000800000 6291456 PASSED 00:04:34.587 malloc 8388608 00:04:34.587 register 0x200000400000 10485760 00:04:34.587 buf 0x200000600000 len 8388608 PASSED 00:04:34.587 free 0x200000600000 8388608 00:04:34.587 unregister 0x200000400000 10485760 PASSED 00:04:34.587 passed 00:04:34.587 00:04:34.587 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.587 suites 1 1 n/a 0 0 00:04:34.587 tests 1 1 1 0 0 00:04:34.587 asserts 15 15 15 0 n/a 00:04:34.587 00:04:34.587 Elapsed time = 0.008 seconds 00:04:34.587 00:04:34.587 real 0m0.078s 00:04:34.587 user 0m0.025s 00:04:34.587 sys 0m0.051s 00:04:34.587 13:50:12 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.587 13:50:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:34.587 ************************************ 00:04:34.587 END TEST env_mem_callbacks 00:04:34.587 ************************************ 00:04:34.587 13:50:12 env -- common/autotest_common.sh@1142 -- # return 0 00:04:34.587 00:04:34.587 real 0m3.178s 00:04:34.587 user 0m1.408s 00:04:34.587 sys 0m1.103s 00:04:34.587 13:50:12 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.587 13:50:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.587 ************************************ 00:04:34.587 END TEST env 00:04:34.587 ************************************ 00:04:34.587 13:50:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.587 13:50:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.588 13:50:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.588 13:50:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.588 13:50:12 -- common/autotest_common.sh@10 -- # set +x 00:04:34.588 ************************************ 00:04:34.588 START TEST rpc 00:04:34.588 ************************************ 00:04:34.588 13:50:12 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.846 * Looking for test storage... 00:04:34.846 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:34.846 13:50:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2824600 00:04:34.846 13:50:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.846 13:50:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:34.846 13:50:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2824600 00:04:34.846 13:50:12 rpc -- common/autotest_common.sh@829 -- # '[' -z 2824600 ']' 00:04:34.846 13:50:12 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.846 13:50:12 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.846 13:50:12 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:34.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.846 13:50:12 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.846 13:50:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.846 [2024-07-15 13:50:12.766065] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:34.846 [2024-07-15 13:50:12.766160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824600 ] 00:04:34.846 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.846 [2024-07-15 13:50:12.852102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.105 [2024-07-15 13:50:12.933875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:35.105 [2024-07-15 13:50:12.933919] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2824600' to capture a snapshot of events at runtime. 00:04:35.105 [2024-07-15 13:50:12.933929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:35.105 [2024-07-15 13:50:12.933937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:35.105 [2024-07-15 13:50:12.933944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2824600 for offline analysis/debug. 00:04:35.105 [2024-07-15 13:50:12.933975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.671 13:50:13 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.671 13:50:13 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:35.671 13:50:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:35.671 13:50:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:35.671 13:50:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.671 13:50:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.671 13:50:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.671 13:50:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.671 13:50:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.671 ************************************ 00:04:35.671 START TEST rpc_integrity 00:04:35.671 ************************************ 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.671 13:50:13 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.671 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.671 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.671 { 00:04:35.671 "name": "Malloc0", 00:04:35.671 "aliases": [ 00:04:35.671 "fc399e32-235c-4c16-8ec3-8de30b671c5a" 00:04:35.671 ], 00:04:35.671 "product_name": "Malloc disk", 00:04:35.671 "block_size": 512, 00:04:35.671 "num_blocks": 16384, 00:04:35.671 "uuid": "fc399e32-235c-4c16-8ec3-8de30b671c5a", 00:04:35.671 "assigned_rate_limits": { 00:04:35.671 "rw_ios_per_sec": 0, 00:04:35.671 "rw_mbytes_per_sec": 0, 00:04:35.671 "r_mbytes_per_sec": 0, 00:04:35.671 "w_mbytes_per_sec": 0 00:04:35.671 }, 00:04:35.671 "claimed": false, 00:04:35.671 "zoned": false, 00:04:35.671 "supported_io_types": { 00:04:35.671 "read": true, 00:04:35.671 "write": true, 00:04:35.671 "unmap": true, 00:04:35.671 "flush": true, 00:04:35.671 "reset": true, 00:04:35.671 "nvme_admin": false, 00:04:35.671 "nvme_io": false, 00:04:35.671 "nvme_io_md": false, 00:04:35.671 "write_zeroes": true, 00:04:35.671 "zcopy": true, 00:04:35.671 "get_zone_info": false, 00:04:35.671 "zone_management": false, 00:04:35.671 "zone_append": false, 00:04:35.671 "compare": false, 00:04:35.671 "compare_and_write": false, 00:04:35.671 "abort": true, 00:04:35.671 "seek_hole": false, 00:04:35.671 "seek_data": false, 00:04:35.671 "copy": true, 00:04:35.671 "nvme_iov_md": false 00:04:35.671 }, 00:04:35.671 "memory_domains": [ 00:04:35.671 { 00:04:35.671 "dma_device_id": "system", 00:04:35.671 "dma_device_type": 1 00:04:35.671 }, 00:04:35.671 { 00:04:35.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.671 "dma_device_type": 2 00:04:35.671 } 00:04:35.671 ], 00:04:35.671 "driver_specific": {} 00:04:35.671 } 00:04:35.671 ]' 00:04:35.946 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.946 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.946 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.946 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.946 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.946 [2024-07-15 13:50:13.790498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.946 [2024-07-15 13:50:13.790536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.946 [2024-07-15 13:50:13.790570] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5e4d0e0 00:04:35.946 [2024-07-15 13:50:13.790579] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:35.946 [2024-07-15 13:50:13.791482] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.946 [2024-07-15 13:50:13.791504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.946 Passthru0 00:04:35.946 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.947 { 00:04:35.947 "name": "Malloc0", 00:04:35.947 "aliases": [ 00:04:35.947 "fc399e32-235c-4c16-8ec3-8de30b671c5a" 00:04:35.947 ], 00:04:35.947 "product_name": "Malloc disk", 00:04:35.947 "block_size": 512, 00:04:35.947 "num_blocks": 16384, 00:04:35.947 "uuid": "fc399e32-235c-4c16-8ec3-8de30b671c5a", 00:04:35.947 "assigned_rate_limits": { 00:04:35.947 "rw_ios_per_sec": 0, 00:04:35.947 "rw_mbytes_per_sec": 0, 00:04:35.947 "r_mbytes_per_sec": 0, 00:04:35.947 "w_mbytes_per_sec": 0 00:04:35.947 }, 00:04:35.947 "claimed": true, 00:04:35.947 "claim_type": "exclusive_write", 00:04:35.947 "zoned": false, 00:04:35.947 "supported_io_types": { 00:04:35.947 "read": true, 00:04:35.947 "write": true, 00:04:35.947 "unmap": true, 00:04:35.947 "flush": true, 00:04:35.947 "reset": true, 00:04:35.947 "nvme_admin": false, 00:04:35.947 "nvme_io": false, 00:04:35.947 "nvme_io_md": false, 00:04:35.947 "write_zeroes": true, 00:04:35.947 "zcopy": true, 00:04:35.947 "get_zone_info": false, 00:04:35.947 "zone_management": false, 00:04:35.947 "zone_append": false, 00:04:35.947 "compare": false, 00:04:35.947 "compare_and_write": false, 00:04:35.947 "abort": true, 00:04:35.947 "seek_hole": false, 00:04:35.947 "seek_data": false, 00:04:35.947 "copy": true, 00:04:35.947 "nvme_iov_md": false 00:04:35.947 }, 00:04:35.947 "memory_domains": [ 00:04:35.947 { 00:04:35.947 "dma_device_id": "system", 00:04:35.947 "dma_device_type": 1 00:04:35.947 }, 00:04:35.947 { 00:04:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.947 "dma_device_type": 2 00:04:35.947 } 00:04:35.947 ], 00:04:35.947 "driver_specific": {} 00:04:35.947 }, 00:04:35.947 { 00:04:35.947 "name": "Passthru0", 00:04:35.947 "aliases": [ 00:04:35.947 "483b2404-ef35-5c3a-9267-6269e0e132b0" 00:04:35.947 ], 00:04:35.947 "product_name": "passthru", 00:04:35.947 "block_size": 512, 00:04:35.947 "num_blocks": 16384, 00:04:35.947 "uuid": "483b2404-ef35-5c3a-9267-6269e0e132b0", 00:04:35.947 "assigned_rate_limits": { 00:04:35.947 "rw_ios_per_sec": 0, 00:04:35.947 "rw_mbytes_per_sec": 0, 00:04:35.947 "r_mbytes_per_sec": 0, 00:04:35.947 "w_mbytes_per_sec": 0 00:04:35.947 }, 00:04:35.947 "claimed": false, 00:04:35.947 "zoned": false, 00:04:35.947 "supported_io_types": { 00:04:35.947 "read": true, 00:04:35.947 "write": true, 00:04:35.947 "unmap": true, 00:04:35.947 "flush": true, 00:04:35.947 "reset": true, 00:04:35.947 "nvme_admin": false, 00:04:35.947 "nvme_io": false, 00:04:35.947 "nvme_io_md": false, 00:04:35.947 "write_zeroes": true, 00:04:35.947 "zcopy": true, 00:04:35.947 "get_zone_info": false, 00:04:35.947 "zone_management": false, 00:04:35.947 "zone_append": false, 00:04:35.947 "compare": false, 00:04:35.947 "compare_and_write": false, 00:04:35.947 "abort": true, 00:04:35.947 
"seek_hole": false, 00:04:35.947 "seek_data": false, 00:04:35.947 "copy": true, 00:04:35.947 "nvme_iov_md": false 00:04:35.947 }, 00:04:35.947 "memory_domains": [ 00:04:35.947 { 00:04:35.947 "dma_device_id": "system", 00:04:35.947 "dma_device_type": 1 00:04:35.947 }, 00:04:35.947 { 00:04:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.947 "dma_device_type": 2 00:04:35.947 } 00:04:35.947 ], 00:04:35.947 "driver_specific": { 00:04:35.947 "passthru": { 00:04:35.947 "name": "Passthru0", 00:04:35.947 "base_bdev_name": "Malloc0" 00:04:35.947 } 00:04:35.947 } 00:04:35.947 } 00:04:35.947 ]' 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.947 13:50:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.947 00:04:35.947 real 0m0.305s 00:04:35.947 user 0m0.188s 00:04:35.947 sys 0m0.051s 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.947 13:50:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.947 ************************************ 00:04:35.947 END TEST rpc_integrity 00:04:35.947 ************************************ 00:04:35.947 13:50:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.947 13:50:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.947 13:50:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.947 13:50:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.947 13:50:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 ************************************ 00:04:36.307 START TEST rpc_plugins 00:04:36.307 ************************************ 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:36.307 13:50:14 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:36.307 { 00:04:36.307 "name": "Malloc1", 00:04:36.307 "aliases": [ 00:04:36.307 "3be40808-32f7-4790-b911-572c86dc56e2" 00:04:36.307 ], 00:04:36.307 "product_name": "Malloc disk", 00:04:36.307 "block_size": 4096, 00:04:36.307 "num_blocks": 256, 00:04:36.307 "uuid": "3be40808-32f7-4790-b911-572c86dc56e2", 00:04:36.307 "assigned_rate_limits": { 00:04:36.307 "rw_ios_per_sec": 0, 00:04:36.307 "rw_mbytes_per_sec": 0, 00:04:36.307 "r_mbytes_per_sec": 0, 00:04:36.307 "w_mbytes_per_sec": 0 00:04:36.307 }, 00:04:36.307 "claimed": false, 00:04:36.307 "zoned": false, 00:04:36.307 "supported_io_types": { 00:04:36.307 "read": true, 00:04:36.307 "write": true, 00:04:36.307 "unmap": true, 00:04:36.307 "flush": true, 00:04:36.307 "reset": true, 00:04:36.307 "nvme_admin": false, 00:04:36.307 "nvme_io": false, 00:04:36.307 "nvme_io_md": false, 00:04:36.307 "write_zeroes": true, 00:04:36.307 "zcopy": true, 00:04:36.307 "get_zone_info": false, 00:04:36.307 "zone_management": false, 00:04:36.307 "zone_append": false, 00:04:36.307 "compare": false, 00:04:36.307 "compare_and_write": false, 00:04:36.307 "abort": true, 00:04:36.307 "seek_hole": false, 00:04:36.307 "seek_data": false, 00:04:36.307 "copy": true, 00:04:36.307 "nvme_iov_md": false 00:04:36.307 }, 00:04:36.307 "memory_domains": [ 00:04:36.307 { 00:04:36.307 "dma_device_id": "system", 00:04:36.307 "dma_device_type": 1 00:04:36.307 }, 00:04:36.307 { 00:04:36.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.307 "dma_device_type": 2 00:04:36.307 } 00:04:36.307 ], 00:04:36.307 "driver_specific": {} 00:04:36.307 } 00:04:36.307 ]' 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:36.307 13:50:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:36.307 00:04:36.307 real 0m0.144s 00:04:36.307 user 0m0.084s 00:04:36.307 sys 0m0.026s 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.307 13:50:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 ************************************ 00:04:36.307 END TEST rpc_plugins 00:04:36.307 ************************************ 00:04:36.307 13:50:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.307 13:50:14 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:36.307 13:50:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.307 13:50:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.307 13:50:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 ************************************ 00:04:36.307 START TEST rpc_trace_cmd_test 00:04:36.307 ************************************ 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.307 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:36.307 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2824600", 00:04:36.307 "tpoint_group_mask": "0x8", 00:04:36.307 "iscsi_conn": { 00:04:36.307 "mask": "0x2", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "scsi": { 00:04:36.307 "mask": "0x4", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "bdev": { 00:04:36.307 "mask": "0x8", 00:04:36.307 "tpoint_mask": "0xffffffffffffffff" 00:04:36.307 }, 00:04:36.307 "nvmf_rdma": { 00:04:36.307 "mask": "0x10", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "nvmf_tcp": { 00:04:36.307 "mask": "0x20", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "ftl": { 00:04:36.307 "mask": "0x40", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "blobfs": { 00:04:36.307 "mask": "0x80", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "dsa": { 00:04:36.307 "mask": "0x200", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.307 }, 00:04:36.307 "thread": { 00:04:36.307 "mask": "0x400", 00:04:36.307 "tpoint_mask": "0x0" 00:04:36.308 }, 00:04:36.308 "nvme_pcie": { 00:04:36.308 "mask": "0x800", 00:04:36.308 "tpoint_mask": "0x0" 00:04:36.308 }, 00:04:36.308 "iaa": { 00:04:36.308 "mask": "0x1000", 00:04:36.308 "tpoint_mask": "0x0" 00:04:36.308 }, 00:04:36.308 "nvme_tcp": { 00:04:36.308 "mask": "0x2000", 00:04:36.308 "tpoint_mask": "0x0" 00:04:36.308 }, 00:04:36.308 "bdev_nvme": { 00:04:36.308 "mask": "0x4000", 00:04:36.308 "tpoint_mask": "0x0" 00:04:36.308 }, 00:04:36.308 "sock": { 00:04:36.308 "mask": "0x8000", 00:04:36.308 "tpoint_mask": "0x0" 00:04:36.308 } 00:04:36.308 }' 00:04:36.308 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:36.308 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:36.308 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:36.308 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:36.308 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:36.567 00:04:36.567 real 0m0.244s 00:04:36.567 user 0m0.197s 00:04:36.567 sys 0m0.039s 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.567 13:50:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.567 ************************************ 00:04:36.567 END TEST rpc_trace_cmd_test 00:04:36.567 ************************************ 00:04:36.567 13:50:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.567 13:50:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.567 13:50:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.567 13:50:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.567 13:50:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.567 13:50:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.567 13:50:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.567 ************************************ 00:04:36.567 START TEST rpc_daemon_integrity 00:04:36.567 ************************************ 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.567 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.826 { 00:04:36.826 "name": "Malloc2", 00:04:36.826 "aliases": [ 00:04:36.826 "9409f9fe-b1a3-4d67-9f88-79fae8e0f637" 00:04:36.826 ], 00:04:36.826 "product_name": "Malloc disk", 00:04:36.826 "block_size": 512, 00:04:36.826 "num_blocks": 16384, 00:04:36.826 "uuid": "9409f9fe-b1a3-4d67-9f88-79fae8e0f637", 00:04:36.826 "assigned_rate_limits": { 00:04:36.826 "rw_ios_per_sec": 0, 00:04:36.826 "rw_mbytes_per_sec": 0, 00:04:36.826 "r_mbytes_per_sec": 0, 00:04:36.826 "w_mbytes_per_sec": 0 00:04:36.826 }, 00:04:36.826 "claimed": false, 00:04:36.826 "zoned": false, 00:04:36.826 "supported_io_types": { 00:04:36.826 "read": true, 00:04:36.826 "write": true, 00:04:36.826 "unmap": true, 00:04:36.826 "flush": true, 00:04:36.826 "reset": true, 00:04:36.826 "nvme_admin": false, 
00:04:36.826 "nvme_io": false, 00:04:36.826 "nvme_io_md": false, 00:04:36.826 "write_zeroes": true, 00:04:36.826 "zcopy": true, 00:04:36.826 "get_zone_info": false, 00:04:36.826 "zone_management": false, 00:04:36.826 "zone_append": false, 00:04:36.826 "compare": false, 00:04:36.826 "compare_and_write": false, 00:04:36.826 "abort": true, 00:04:36.826 "seek_hole": false, 00:04:36.826 "seek_data": false, 00:04:36.826 "copy": true, 00:04:36.826 "nvme_iov_md": false 00:04:36.826 }, 00:04:36.826 "memory_domains": [ 00:04:36.826 { 00:04:36.826 "dma_device_id": "system", 00:04:36.826 "dma_device_type": 1 00:04:36.826 }, 00:04:36.826 { 00:04:36.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.826 "dma_device_type": 2 00:04:36.826 } 00:04:36.826 ], 00:04:36.826 "driver_specific": {} 00:04:36.826 } 00:04:36.826 ]' 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.826 [2024-07-15 13:50:14.720833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.826 [2024-07-15 13:50:14.720866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.826 [2024-07-15 13:50:14.720882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5e4d310 00:04:36.826 [2024-07-15 13:50:14.720892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.826 [2024-07-15 13:50:14.721632] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.826 [2024-07-15 13:50:14.721655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.826 Passthru0 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.826 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.826 { 00:04:36.826 "name": "Malloc2", 00:04:36.826 "aliases": [ 00:04:36.826 "9409f9fe-b1a3-4d67-9f88-79fae8e0f637" 00:04:36.826 ], 00:04:36.826 "product_name": "Malloc disk", 00:04:36.826 "block_size": 512, 00:04:36.826 "num_blocks": 16384, 00:04:36.826 "uuid": "9409f9fe-b1a3-4d67-9f88-79fae8e0f637", 00:04:36.826 "assigned_rate_limits": { 00:04:36.826 "rw_ios_per_sec": 0, 00:04:36.826 "rw_mbytes_per_sec": 0, 00:04:36.826 "r_mbytes_per_sec": 0, 00:04:36.826 "w_mbytes_per_sec": 0 00:04:36.826 }, 00:04:36.826 "claimed": true, 00:04:36.826 "claim_type": "exclusive_write", 00:04:36.826 "zoned": false, 00:04:36.826 "supported_io_types": { 00:04:36.827 "read": true, 00:04:36.827 "write": true, 00:04:36.827 "unmap": true, 00:04:36.827 "flush": true, 00:04:36.827 "reset": true, 00:04:36.827 "nvme_admin": false, 00:04:36.827 "nvme_io": false, 00:04:36.827 "nvme_io_md": false, 00:04:36.827 "write_zeroes": true, 00:04:36.827 "zcopy": true, 
00:04:36.827 "get_zone_info": false, 00:04:36.827 "zone_management": false, 00:04:36.827 "zone_append": false, 00:04:36.827 "compare": false, 00:04:36.827 "compare_and_write": false, 00:04:36.827 "abort": true, 00:04:36.827 "seek_hole": false, 00:04:36.827 "seek_data": false, 00:04:36.827 "copy": true, 00:04:36.827 "nvme_iov_md": false 00:04:36.827 }, 00:04:36.827 "memory_domains": [ 00:04:36.827 { 00:04:36.827 "dma_device_id": "system", 00:04:36.827 "dma_device_type": 1 00:04:36.827 }, 00:04:36.827 { 00:04:36.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.827 "dma_device_type": 2 00:04:36.827 } 00:04:36.827 ], 00:04:36.827 "driver_specific": {} 00:04:36.827 }, 00:04:36.827 { 00:04:36.827 "name": "Passthru0", 00:04:36.827 "aliases": [ 00:04:36.827 "7e93739c-f04b-5789-aa4a-f684054f611a" 00:04:36.827 ], 00:04:36.827 "product_name": "passthru", 00:04:36.827 "block_size": 512, 00:04:36.827 "num_blocks": 16384, 00:04:36.827 "uuid": "7e93739c-f04b-5789-aa4a-f684054f611a", 00:04:36.827 "assigned_rate_limits": { 00:04:36.827 "rw_ios_per_sec": 0, 00:04:36.827 "rw_mbytes_per_sec": 0, 00:04:36.827 "r_mbytes_per_sec": 0, 00:04:36.827 "w_mbytes_per_sec": 0 00:04:36.827 }, 00:04:36.827 "claimed": false, 00:04:36.827 "zoned": false, 00:04:36.827 "supported_io_types": { 00:04:36.827 "read": true, 00:04:36.827 "write": true, 00:04:36.827 "unmap": true, 00:04:36.827 "flush": true, 00:04:36.827 "reset": true, 00:04:36.827 "nvme_admin": false, 00:04:36.827 "nvme_io": false, 00:04:36.827 "nvme_io_md": false, 00:04:36.827 "write_zeroes": true, 00:04:36.827 "zcopy": true, 00:04:36.827 "get_zone_info": false, 00:04:36.827 "zone_management": false, 00:04:36.827 "zone_append": false, 00:04:36.827 "compare": false, 00:04:36.827 "compare_and_write": false, 00:04:36.827 "abort": true, 00:04:36.827 "seek_hole": false, 00:04:36.827 "seek_data": false, 00:04:36.827 "copy": true, 00:04:36.827 "nvme_iov_md": false 00:04:36.827 }, 00:04:36.827 "memory_domains": [ 00:04:36.827 { 00:04:36.827 "dma_device_id": "system", 00:04:36.827 "dma_device_type": 1 00:04:36.827 }, 00:04:36.827 { 00:04:36.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.827 "dma_device_type": 2 00:04:36.827 } 00:04:36.827 ], 00:04:36.827 "driver_specific": { 00:04:36.827 "passthru": { 00:04:36.827 "name": "Passthru0", 00:04:36.827 "base_bdev_name": "Malloc2" 00:04:36.827 } 00:04:36.827 } 00:04:36.827 } 00:04:36.827 ]' 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.827 00:04:36.827 real 0m0.293s 00:04:36.827 user 0m0.188s 00:04:36.827 sys 0m0.047s 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.827 13:50:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.827 ************************************ 00:04:36.827 END TEST rpc_daemon_integrity 00:04:36.827 ************************************ 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.098 13:50:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.098 13:50:14 rpc -- rpc/rpc.sh@84 -- # killprocess 2824600 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@948 -- # '[' -z 2824600 ']' 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@952 -- # kill -0 2824600 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@953 -- # uname 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2824600 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2824600' 00:04:37.098 killing process with pid 2824600 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@967 -- # kill 2824600 00:04:37.098 13:50:14 rpc -- common/autotest_common.sh@972 -- # wait 2824600 00:04:37.357 00:04:37.357 real 0m2.680s 00:04:37.357 user 0m3.388s 00:04:37.357 sys 0m0.864s 00:04:37.357 13:50:15 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.357 13:50:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.357 ************************************ 00:04:37.357 END TEST rpc 00:04:37.357 ************************************ 00:04:37.357 13:50:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.357 13:50:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.357 13:50:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.357 13:50:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.357 13:50:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.357 ************************************ 00:04:37.357 START TEST skip_rpc 00:04:37.357 ************************************ 00:04:37.357 13:50:15 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.616 * Looking for test storage... 
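Condensed for reference, the integrity flow the rpc suite above just exercised is a handful of RPCs: create a malloc bdev, stack a passthru bdev on it, count bdevs with jq, then tear both down and confirm the list is empty. A minimal shell sketch of that sequence (paths assume an SPDK checkout at ./spdk with a target already running; this is an illustration, not the harness's own code):

    ./spdk/scripts/rpc.py bdev_malloc_create 8 512                       # 8 MiB, 512-byte blocks -> Malloc bdev
    ./spdk/scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0   # stack a passthru on top
    ./spdk/scripts/rpc.py bdev_get_bdevs | jq length                     # expect 2
    ./spdk/scripts/rpc.py bdev_passthru_delete Passthru0
    ./spdk/scripts/rpc.py bdev_malloc_delete Malloc2
    ./spdk/scripts/rpc.py bdev_get_bdevs | jq length                     # expect 0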
00:04:37.616 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:37.616 13:50:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:37.616 13:50:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:37.616 13:50:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.616 13:50:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.616 13:50:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.616 13:50:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.616 ************************************ 00:04:37.616 START TEST skip_rpc 00:04:37.616 ************************************ 00:04:37.616 13:50:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:37.616 13:50:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2825161 00:04:37.616 13:50:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.616 13:50:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.616 13:50:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:37.616 [2024-07-15 13:50:15.565950] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:37.616 [2024-07-15 13:50:15.566012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825161 ] 00:04:37.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.616 [2024-07-15 13:50:15.650160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.873 [2024-07-15 13:50:15.733164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:43.166 13:50:20 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2825161 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2825161 ']' 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2825161 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2825161 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2825161' 00:04:43.166 killing process with pid 2825161 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2825161 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2825161 00:04:43.166 00:04:43.166 real 0m5.407s 00:04:43.166 user 0m5.148s 00:04:43.166 sys 0m0.297s 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.166 13:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.166 ************************************ 00:04:43.166 END TEST skip_rpc 00:04:43.166 ************************************ 00:04:43.166 13:50:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.166 13:50:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.166 13:50:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.166 13:50:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.166 13:50:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.166 ************************************ 00:04:43.166 START TEST skip_rpc_with_json 00:04:43.166 ************************************ 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2825904 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2825904 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2825904 ']' 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
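The skip_rpc pass that just completed boils down to one negative check: with --no-rpc-server the target must not answer RPCs. A standalone sketch of that check (paths assume an SPDK checkout at ./spdk; illustrative only, not the harness's code):

    ./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target with the RPC server disabled
    tgt_pid=$!
    sleep 5                                              # mirrors the harness's fixed 5 s wait
    if ./spdk/scripts/rpc.py spdk_get_version; then      # must fail: nothing listens on the socket
        echo "unexpected: RPC server answered" >&2
        kill "$tgt_pid"; exit 1
    fi
    kill "$tgt_pid"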
00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.166 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.166 [2024-07-15 13:50:21.067462] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:43.166 [2024-07-15 13:50:21.067526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825904 ] 00:04:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.166 [2024-07-15 13:50:21.153788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.424 [2024-07-15 13:50:21.243996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.992 [2024-07-15 13:50:21.904170] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.992 request: 00:04:43.992 { 00:04:43.992 "trtype": "tcp", 00:04:43.992 "method": "nvmf_get_transports", 00:04:43.992 "req_id": 1 00:04:43.992 } 00:04:43.992 Got JSON-RPC error response 00:04:43.992 response: 00:04:43.992 { 00:04:43.992 "code": -19, 00:04:43.992 "message": "No such device" 00:04:43.992 } 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.992 [2024-07-15 13:50:21.916275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.992 13:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.250 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.250 13:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:44.250 { 00:04:44.250 "subsystems": [ 00:04:44.250 { 00:04:44.250 "subsystem": "scheduler", 00:04:44.250 "config": [ 00:04:44.250 { 00:04:44.250 "method": "framework_set_scheduler", 00:04:44.250 "params": { 00:04:44.250 "name": "static" 00:04:44.250 } 00:04:44.250 } 00:04:44.250 ] 00:04:44.250 }, 00:04:44.250 { 00:04:44.250 "subsystem": "vmd", 00:04:44.250 "config": [] 00:04:44.250 }, 00:04:44.250 { 00:04:44.250 "subsystem": "sock", 00:04:44.250 "config": [ 00:04:44.250 { 00:04:44.250 "method": "sock_set_default_impl", 00:04:44.250 
"params": { 00:04:44.250 "impl_name": "posix" 00:04:44.250 } 00:04:44.250 }, 00:04:44.250 { 00:04:44.250 "method": "sock_impl_set_options", 00:04:44.250 "params": { 00:04:44.250 "impl_name": "ssl", 00:04:44.250 "recv_buf_size": 4096, 00:04:44.250 "send_buf_size": 4096, 00:04:44.250 "enable_recv_pipe": true, 00:04:44.250 "enable_quickack": false, 00:04:44.250 "enable_placement_id": 0, 00:04:44.250 "enable_zerocopy_send_server": true, 00:04:44.250 "enable_zerocopy_send_client": false, 00:04:44.250 "zerocopy_threshold": 0, 00:04:44.250 "tls_version": 0, 00:04:44.250 "enable_ktls": false 00:04:44.250 } 00:04:44.250 }, 00:04:44.250 { 00:04:44.250 "method": "sock_impl_set_options", 00:04:44.250 "params": { 00:04:44.251 "impl_name": "posix", 00:04:44.251 "recv_buf_size": 2097152, 00:04:44.251 "send_buf_size": 2097152, 00:04:44.251 "enable_recv_pipe": true, 00:04:44.251 "enable_quickack": false, 00:04:44.251 "enable_placement_id": 0, 00:04:44.251 "enable_zerocopy_send_server": true, 00:04:44.251 "enable_zerocopy_send_client": false, 00:04:44.251 "zerocopy_threshold": 0, 00:04:44.251 "tls_version": 0, 00:04:44.251 "enable_ktls": false 00:04:44.251 } 00:04:44.251 } 00:04:44.251 ] 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "subsystem": "iobuf", 00:04:44.251 "config": [ 00:04:44.251 { 00:04:44.251 "method": "iobuf_set_options", 00:04:44.251 "params": { 00:04:44.251 "small_pool_count": 8192, 00:04:44.251 "large_pool_count": 1024, 00:04:44.251 "small_bufsize": 8192, 00:04:44.251 "large_bufsize": 135168 00:04:44.251 } 00:04:44.251 } 00:04:44.251 ] 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "subsystem": "keyring", 00:04:44.251 "config": [] 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "subsystem": "vfio_user_target", 00:04:44.251 "config": null 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "subsystem": "accel", 00:04:44.251 "config": [ 00:04:44.251 { 00:04:44.251 "method": "accel_set_options", 00:04:44.251 "params": { 00:04:44.251 "small_cache_size": 128, 00:04:44.251 "large_cache_size": 16, 00:04:44.251 "task_count": 2048, 00:04:44.251 "sequence_count": 2048, 00:04:44.251 "buf_count": 2048 00:04:44.251 } 00:04:44.251 } 00:04:44.251 ] 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "subsystem": "bdev", 00:04:44.251 "config": [ 00:04:44.251 { 00:04:44.251 "method": "bdev_set_options", 00:04:44.251 "params": { 00:04:44.251 "bdev_io_pool_size": 65535, 00:04:44.251 "bdev_io_cache_size": 256, 00:04:44.251 "bdev_auto_examine": true, 00:04:44.251 "iobuf_small_cache_size": 128, 00:04:44.251 "iobuf_large_cache_size": 16 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "bdev_raid_set_options", 00:04:44.251 "params": { 00:04:44.251 "process_window_size_kb": 1024 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "bdev_nvme_set_options", 00:04:44.251 "params": { 00:04:44.251 "action_on_timeout": "none", 00:04:44.251 "timeout_us": 0, 00:04:44.251 "timeout_admin_us": 0, 00:04:44.251 "keep_alive_timeout_ms": 10000, 00:04:44.251 "arbitration_burst": 0, 00:04:44.251 "low_priority_weight": 0, 00:04:44.251 "medium_priority_weight": 0, 00:04:44.251 "high_priority_weight": 0, 00:04:44.251 "nvme_adminq_poll_period_us": 10000, 00:04:44.251 "nvme_ioq_poll_period_us": 0, 00:04:44.251 "io_queue_requests": 0, 00:04:44.251 "delay_cmd_submit": true, 00:04:44.251 "transport_retry_count": 4, 00:04:44.251 "bdev_retry_count": 3, 00:04:44.251 "transport_ack_timeout": 0, 00:04:44.251 "ctrlr_loss_timeout_sec": 0, 00:04:44.251 "reconnect_delay_sec": 0, 00:04:44.251 "fast_io_fail_timeout_sec": 0, 00:04:44.251 
"disable_auto_failback": false, 00:04:44.251 "generate_uuids": false, 00:04:44.251 "transport_tos": 0, 00:04:44.251 "nvme_error_stat": false, 00:04:44.251 "rdma_srq_size": 0, 00:04:44.251 "io_path_stat": false, 00:04:44.251 "allow_accel_sequence": false, 00:04:44.251 "rdma_max_cq_size": 0, 00:04:44.251 "rdma_cm_event_timeout_ms": 0, 00:04:44.251 "dhchap_digests": [ 00:04:44.251 "sha256", 00:04:44.251 "sha384", 00:04:44.251 "sha512" 00:04:44.251 ], 00:04:44.251 "dhchap_dhgroups": [ 00:04:44.251 "null", 00:04:44.251 "ffdhe2048", 00:04:44.251 "ffdhe3072", 00:04:44.251 "ffdhe4096", 00:04:44.251 "ffdhe6144", 00:04:44.251 "ffdhe8192" 00:04:44.251 ] 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "bdev_nvme_set_hotplug", 00:04:44.251 "params": { 00:04:44.251 "period_us": 100000, 00:04:44.251 "enable": false 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "bdev_iscsi_set_options", 00:04:44.251 "params": { 00:04:44.251 "timeout_sec": 30 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "bdev_wait_for_examine" 00:04:44.251 } 00:04:44.251 ] 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "subsystem": "nvmf", 00:04:44.251 "config": [ 00:04:44.251 { 00:04:44.251 "method": "nvmf_set_config", 00:04:44.251 "params": { 00:04:44.251 "discovery_filter": "match_any", 00:04:44.251 "admin_cmd_passthru": { 00:04:44.251 "identify_ctrlr": false 00:04:44.251 } 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "nvmf_set_max_subsystems", 00:04:44.251 "params": { 00:04:44.251 "max_subsystems": 1024 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "nvmf_set_crdt", 00:04:44.251 "params": { 00:04:44.251 "crdt1": 0, 00:04:44.251 "crdt2": 0, 00:04:44.251 "crdt3": 0 00:04:44.251 } 00:04:44.251 }, 00:04:44.251 { 00:04:44.251 "method": "nvmf_create_transport", 00:04:44.251 "params": { 00:04:44.251 "trtype": "TCP", 00:04:44.251 "max_queue_depth": 128, 00:04:44.251 "max_io_qpairs_per_ctrlr": 127, 00:04:44.252 "in_capsule_data_size": 4096, 00:04:44.252 "max_io_size": 131072, 00:04:44.252 "io_unit_size": 131072, 00:04:44.252 "max_aq_depth": 128, 00:04:44.252 "num_shared_buffers": 511, 00:04:44.252 "buf_cache_size": 4294967295, 00:04:44.252 "dif_insert_or_strip": false, 00:04:44.252 "zcopy": false, 00:04:44.252 "c2h_success": true, 00:04:44.252 "sock_priority": 0, 00:04:44.252 "abort_timeout_sec": 1, 00:04:44.252 "ack_timeout": 0, 00:04:44.252 "data_wr_pool_size": 0 00:04:44.252 } 00:04:44.252 } 00:04:44.252 ] 00:04:44.252 }, 00:04:44.252 { 00:04:44.252 "subsystem": "nbd", 00:04:44.252 "config": [] 00:04:44.252 }, 00:04:44.252 { 00:04:44.252 "subsystem": "ublk", 00:04:44.252 "config": [] 00:04:44.252 }, 00:04:44.252 { 00:04:44.252 "subsystem": "vhost_blk", 00:04:44.252 "config": [] 00:04:44.252 }, 00:04:44.252 { 00:04:44.252 "subsystem": "scsi", 00:04:44.252 "config": null 00:04:44.252 }, 00:04:44.252 { 00:04:44.252 "subsystem": "iscsi", 00:04:44.252 "config": [ 00:04:44.252 { 00:04:44.252 "method": "iscsi_set_options", 00:04:44.252 "params": { 00:04:44.252 "node_base": "iqn.2016-06.io.spdk", 00:04:44.252 "max_sessions": 128, 00:04:44.252 "max_connections_per_session": 2, 00:04:44.252 "max_queue_depth": 64, 00:04:44.252 "default_time2wait": 2, 00:04:44.252 "default_time2retain": 20, 00:04:44.252 "first_burst_length": 8192, 00:04:44.252 "immediate_data": true, 00:04:44.252 "allow_duplicated_isid": false, 00:04:44.252 "error_recovery_level": 0, 00:04:44.252 "nop_timeout": 60, 00:04:44.252 "nop_in_interval": 30, 00:04:44.252 
"disable_chap": false, 00:04:44.252 "require_chap": false, 00:04:44.252 "mutual_chap": false, 00:04:44.252 "chap_group": 0, 00:04:44.252 "max_large_datain_per_connection": 64, 00:04:44.252 "max_r2t_per_connection": 4, 00:04:44.252 "pdu_pool_size": 36864, 00:04:44.252 "immediate_data_pool_size": 16384, 00:04:44.252 "data_out_pool_size": 2048 00:04:44.252 } 00:04:44.252 } 00:04:44.252 ] 00:04:44.252 }, 00:04:44.252 { 00:04:44.252 "subsystem": "vhost_scsi", 00:04:44.252 "config": [] 00:04:44.252 } 00:04:44.252 ] 00:04:44.252 } 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2825904 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2825904 ']' 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2825904 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2825904 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2825904' 00:04:44.252 killing process with pid 2825904 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2825904 00:04:44.252 13:50:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2825904 00:04:44.511 13:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2826119 00:04:44.511 13:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:44.511 13:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2826119 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2826119 ']' 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2826119 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2826119 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2826119' 00:04:49.773 killing process with pid 2826119 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2826119 00:04:49.773 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2826119 00:04:50.032 
13:50:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:50.032 13:50:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:50.032 00:04:50.032 real 0m6.848s 00:04:50.032 user 0m6.600s 00:04:50.032 sys 0m0.705s 00:04:50.032 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.033 ************************************ 00:04:50.033 END TEST skip_rpc_with_json 00:04:50.033 ************************************ 00:04:50.033 13:50:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.033 13:50:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.033 13:50:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.033 13:50:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.033 13:50:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.033 ************************************ 00:04:50.033 START TEST skip_rpc_with_delay 00:04:50.033 ************************************ 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.033 13:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.033 [2024-07-15 13:50:28.001110] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
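That ERROR from app.c is the expected result: --wait-for-rpc makes no sense once --no-rpc-server has disabled the RPC server, and skip_rpc_with_delay asserts that the target refuses to start. The harness wraps the call in its NOT helper, which inverts the exit status; an illustrative reimplementation (the real helper in autotest_common.sh also inspects specific exit codes):

    NOT() { "$@" && return 1 || return 0; }   # succeed only when the wrapped command fails
    NOT ./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc &&
        echo "ok: spdk_tgt rejected the flag combination"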
00:04:50.033 [2024-07-15 13:50:28.001269] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:50.033 13:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:50.033 13:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.033 13:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.033 13:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.033 00:04:50.033 real 0m0.046s 00:04:50.033 user 0m0.022s 00:04:50.033 sys 0m0.024s 00:04:50.033 13:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.033 13:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.033 ************************************ 00:04:50.033 END TEST skip_rpc_with_delay 00:04:50.033 ************************************ 00:04:50.033 13:50:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.033 13:50:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.033 13:50:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.033 13:50:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.033 13:50:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.033 13:50:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.033 13:50:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.291 ************************************ 00:04:50.291 START TEST exit_on_failed_rpc_init 00:04:50.291 ************************************ 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2826926 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2826926 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2826926 ']' 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.291 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.291 [2024-07-15 13:50:28.132019] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:50.291 [2024-07-15 13:50:28.132103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826926 ] 00:04:50.291 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.291 [2024-07-15 13:50:28.218060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.291 [2024-07-15 13:50:28.308947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.224 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.225 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.225 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.225 13:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.225 [2024-07-15 13:50:28.999385] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
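exit_on_failed_rpc_init now launches a second target against the same default socket; the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error a few entries below is precisely what the test is after. Stripped of the harness, the collision looks like this (sketch, assumed ./spdk paths):

    ./spdk/build/bin/spdk_tgt -m 0x1 &            # first target binds /var/tmp/spdk.sock
    sleep 2                                       # let it come up
    ./spdk/build/bin/spdk_tgt -m 0x2              # second target: rpc_listen fails, app stops
    echo "second target exited with status $?"    # nonzero is the expected outcome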
00:04:51.225 [2024-07-15 13:50:28.999477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826946 ] 00:04:51.225 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.225 [2024-07-15 13:50:29.082873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.225 [2024-07-15 13:50:29.165858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.225 [2024-07-15 13:50:29.165966] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:51.225 [2024-07-15 13:50:29.165979] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.225 [2024-07-15 13:50:29.165987] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2826926 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2826926 ']' 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2826926 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2826926 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2826926' 00:04:51.225 killing process with pid 2826926 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2826926 00:04:51.225 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2826926 00:04:51.793 00:04:51.793 real 0m1.527s 00:04:51.793 user 0m1.713s 00:04:51.793 sys 0m0.472s 00:04:51.793 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.793 13:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.793 ************************************ 00:04:51.793 END TEST exit_on_failed_rpc_init 00:04:51.793 ************************************ 00:04:51.793 13:50:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.793 13:50:29 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:51.793 00:04:51.793 real 0m14.282s 00:04:51.793 user 0m13.655s 00:04:51.793 sys 0m1.816s 00:04:51.793 13:50:29 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.793 13:50:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.793 ************************************ 00:04:51.793 END TEST skip_rpc 00:04:51.793 ************************************ 00:04:51.793 13:50:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.793 13:50:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.793 13:50:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.793 13:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.793 13:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:51.793 ************************************ 00:04:51.793 START TEST rpc_client 00:04:51.793 ************************************ 00:04:51.793 13:50:29 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.793 * Looking for test storage... 00:04:52.052 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:52.052 13:50:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:52.052 OK 00:04:52.052 13:50:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:52.052 00:04:52.052 real 0m0.135s 00:04:52.052 user 0m0.053s 00:04:52.052 sys 0m0.093s 00:04:52.052 13:50:29 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.052 13:50:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:52.052 ************************************ 00:04:52.052 END TEST rpc_client 00:04:52.052 ************************************ 00:04:52.052 13:50:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.052 13:50:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:52.052 13:50:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.052 13:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.052 13:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:52.052 ************************************ 00:04:52.052 START TEST json_config 00:04:52.052 ************************************ 00:04:52.052 13:50:29 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:52.052 13:50:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
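[Annotation] json_config begins by sourcing test/nvmf/common.sh, and its trace continues below with the host-identity derivation: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID ends up holding just the UUID suffix. A hedged sketch of an equivalent derivation (the parameter expansion is illustrative, not quoted from the script):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:800e967b-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip everything through "uuid:" to keep the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")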
00:04:52.052 13:50:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.052 13:50:30 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:52.052 13:50:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.052 13:50:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.052 13:50:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.052 13:50:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.052 13:50:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.053 13:50:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.053 13:50:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:52.053 13:50:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@47 -- # : 0 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:04:52.053 13:50:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:52.053 13:50:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:52.053 WARNING: No tests are enabled so not running JSON configuration tests 00:04:52.053 13:50:30 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:52.053 00:04:52.053 real 0m0.110s 00:04:52.053 user 0m0.052s 00:04:52.053 sys 0m0.060s 00:04:52.053 13:50:30 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.053 13:50:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.053 ************************************ 00:04:52.053 END TEST json_config 00:04:52.053 ************************************ 00:04:52.312 13:50:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.312 13:50:30 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:52.312 13:50:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.312 13:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.312 13:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:52.312 ************************************ 00:04:52.312 START TEST json_config_extra_key 00:04:52.312 ************************************ 00:04:52.312 13:50:30 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:52.312 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.312 13:50:30 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:52.312 13:50:30 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.312 13:50:30 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.312 13:50:30 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.312 13:50:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.312 13:50:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.312 13:50:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.312 13:50:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:52.312 13:50:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:52.312 
13:50:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:52.312 13:50:30 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:52.312 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:52.312 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:52.312 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:52.312 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:52.313 INFO: launching applications... 00:04:52.313 13:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2827273 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.313 Waiting for target to run... 
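[Annotation] json_config_extra_key starts the target with -r /var/tmp/spdk_tgt.sock and then blocks in the waitforlisten call traced below. A standalone sketch of that wait, assuming only coreutils and SPDK's scripts/rpc.py (the helper name wait_for_rpc and the retry budget are illustrative; the real logic lives in autotest_common.sh):

    # Poll until the app answers on its RPC socket, or give up.
    wait_for_rpc() {
        local pid=$1 sock=$2 retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died before it started listening
            ./scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                                       # never came up within the budget
    }
    wait_for_rpc "$app_pid" /var/tmp/spdk_tgt.sock     # $app_pid stands in for 2827273 above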
00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2827273 /var/tmp/spdk_tgt.sock 00:04:52.313 13:50:30 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2827273 ']' 00:04:52.313 13:50:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:52.313 13:50:30 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.313 13:50:30 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.313 13:50:30 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.313 13:50:30 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.313 13:50:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.313 [2024-07-15 13:50:30.320795] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:52.313 [2024-07-15 13:50:30.320892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827273 ] 00:04:52.313 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.571 [2024-07-15 13:50:30.626620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.831 [2024-07-15 13:50:30.698797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.103 13:50:31 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.103 13:50:31 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.103 00:04:53.103 13:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:53.103 INFO: shutting down applications... 
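[Annotation] The shutdown traced in the next entries is deliberately gentle: json_config/common.sh sends SIGINT and then polls the pid for up to 30 half-second intervals before declaring failure. A sketch of that loop, assuming $app_pid holds the target's pid (2827273 in this trace):

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break    # process gone: clean shutdown
        sleep 0.5
    done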
00:04:53.103 13:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2827273 ]] 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2827273 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2827273 00:04:53.103 13:50:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2827273 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.670 13:50:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.670 SPDK target shutdown done 00:04:53.670 13:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.670 Success 00:04:53.670 00:04:53.670 real 0m1.471s 00:04:53.670 user 0m1.247s 00:04:53.670 sys 0m0.422s 00:04:53.670 13:50:31 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.670 13:50:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.670 ************************************ 00:04:53.670 END TEST json_config_extra_key 00:04:53.670 ************************************ 00:04:53.670 13:50:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.670 13:50:31 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.670 13:50:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.670 13:50:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.670 13:50:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.670 ************************************ 00:04:53.670 START TEST alias_rpc 00:04:53.670 ************************************ 00:04:53.670 13:50:31 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.929 * Looking for test storage... 
00:04:53.929 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:53.929 13:50:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.929 13:50:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2827506 00:04:53.929 13:50:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2827506 00:04:53.929 13:50:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.929 13:50:31 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2827506 ']' 00:04:53.929 13:50:31 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.929 13:50:31 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.929 13:50:31 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.929 13:50:31 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.929 13:50:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.929 [2024-07-15 13:50:31.876785] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:53.929 [2024-07-15 13:50:31.876857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827506 ] 00:04:53.929 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.929 [2024-07-15 13:50:31.946273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.188 [2024-07-15 13:50:32.028945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.755 13:50:32 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.755 13:50:32 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:54.755 13:50:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:55.014 13:50:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2827506 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2827506 ']' 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2827506 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2827506 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2827506' 00:04:55.014 killing process with pid 2827506 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@967 -- # kill 2827506 00:04:55.014 13:50:32 alias_rpc -- common/autotest_common.sh@972 -- # wait 2827506 00:04:55.273 00:04:55.273 real 0m1.535s 00:04:55.273 user 0m1.642s 00:04:55.273 sys 0m0.459s 00:04:55.273 13:50:33 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.273 13:50:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
00:04:55.273 ************************************ 00:04:55.273 END TEST alias_rpc 00:04:55.273 ************************************ 00:04:55.273 13:50:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.273 13:50:33 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:55.273 13:50:33 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:55.273 13:50:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.273 13:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.273 13:50:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.532 ************************************ 00:04:55.532 START TEST spdkcli_tcp 00:04:55.532 ************************************ 00:04:55.532 13:50:33 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:55.532 * Looking for test storage... 00:04:55.532 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:55.532 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.533 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2827851 00:04:55.533 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2827851 00:04:55.533 13:50:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2827851 ']' 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.533 13:50:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.533 [2024-07-15 13:50:33.504424] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
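[Annotation] spdkcli_tcp talks to the target over TCP rather than the Unix socket: the entries that follow show a socat process bridging 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py dumping the method table through it. A sketch of that bridge, built from the commands visible in the trace (a single-shot socat appears to suffice here because rpc.py is invoked with 100 retries and a 2 s timeout):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true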
00:04:55.533 [2024-07-15 13:50:33.504506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827851 ] 00:04:55.533 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.533 [2024-07-15 13:50:33.591728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.792 [2024-07-15 13:50:33.683739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.792 [2024-07-15 13:50:33.683739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.359 13:50:34 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.359 13:50:34 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:56.360 13:50:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2827929 00:04:56.360 13:50:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.360 13:50:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.618 [ 00:04:56.618 "spdk_get_version", 00:04:56.618 "rpc_get_methods", 00:04:56.618 "trace_get_info", 00:04:56.618 "trace_get_tpoint_group_mask", 00:04:56.618 "trace_disable_tpoint_group", 00:04:56.618 "trace_enable_tpoint_group", 00:04:56.618 "trace_clear_tpoint_mask", 00:04:56.618 "trace_set_tpoint_mask", 00:04:56.618 "vfu_tgt_set_base_path", 00:04:56.618 "framework_get_pci_devices", 00:04:56.618 "framework_get_config", 00:04:56.618 "framework_get_subsystems", 00:04:56.618 "keyring_get_keys", 00:04:56.618 "iobuf_get_stats", 00:04:56.618 "iobuf_set_options", 00:04:56.618 "sock_get_default_impl", 00:04:56.618 "sock_set_default_impl", 00:04:56.618 "sock_impl_set_options", 00:04:56.618 "sock_impl_get_options", 00:04:56.618 "vmd_rescan", 00:04:56.618 "vmd_remove_device", 00:04:56.618 "vmd_enable", 00:04:56.618 "accel_get_stats", 00:04:56.618 "accel_set_options", 00:04:56.618 "accel_set_driver", 00:04:56.618 "accel_crypto_key_destroy", 00:04:56.618 "accel_crypto_keys_get", 00:04:56.618 "accel_crypto_key_create", 00:04:56.618 "accel_assign_opc", 00:04:56.618 "accel_get_module_info", 00:04:56.618 "accel_get_opc_assignments", 00:04:56.618 "notify_get_notifications", 00:04:56.618 "notify_get_types", 00:04:56.618 "bdev_get_histogram", 00:04:56.618 "bdev_enable_histogram", 00:04:56.618 "bdev_set_qos_limit", 00:04:56.618 "bdev_set_qd_sampling_period", 00:04:56.618 "bdev_get_bdevs", 00:04:56.618 "bdev_reset_iostat", 00:04:56.618 "bdev_get_iostat", 00:04:56.618 "bdev_examine", 00:04:56.618 "bdev_wait_for_examine", 00:04:56.618 "bdev_set_options", 00:04:56.618 "scsi_get_devices", 00:04:56.618 "thread_set_cpumask", 00:04:56.618 "framework_get_governor", 00:04:56.618 "framework_get_scheduler", 00:04:56.618 "framework_set_scheduler", 00:04:56.618 "framework_get_reactors", 00:04:56.618 "thread_get_io_channels", 00:04:56.618 "thread_get_pollers", 00:04:56.618 "thread_get_stats", 00:04:56.618 "framework_monitor_context_switch", 00:04:56.618 "spdk_kill_instance", 00:04:56.618 "log_enable_timestamps", 00:04:56.618 "log_get_flags", 00:04:56.618 "log_clear_flag", 00:04:56.618 "log_set_flag", 00:04:56.618 "log_get_level", 00:04:56.618 "log_set_level", 00:04:56.618 "log_get_print_level", 00:04:56.618 "log_set_print_level", 00:04:56.618 "framework_enable_cpumask_locks", 00:04:56.618 "framework_disable_cpumask_locks", 
00:04:56.618 "framework_wait_init", 00:04:56.619 "framework_start_init", 00:04:56.619 "virtio_blk_create_transport", 00:04:56.619 "virtio_blk_get_transports", 00:04:56.619 "vhost_controller_set_coalescing", 00:04:56.619 "vhost_get_controllers", 00:04:56.619 "vhost_delete_controller", 00:04:56.619 "vhost_create_blk_controller", 00:04:56.619 "vhost_scsi_controller_remove_target", 00:04:56.619 "vhost_scsi_controller_add_target", 00:04:56.619 "vhost_start_scsi_controller", 00:04:56.619 "vhost_create_scsi_controller", 00:04:56.619 "ublk_recover_disk", 00:04:56.619 "ublk_get_disks", 00:04:56.619 "ublk_stop_disk", 00:04:56.619 "ublk_start_disk", 00:04:56.619 "ublk_destroy_target", 00:04:56.619 "ublk_create_target", 00:04:56.619 "nbd_get_disks", 00:04:56.619 "nbd_stop_disk", 00:04:56.619 "nbd_start_disk", 00:04:56.619 "env_dpdk_get_mem_stats", 00:04:56.619 "nvmf_stop_mdns_prr", 00:04:56.619 "nvmf_publish_mdns_prr", 00:04:56.619 "nvmf_subsystem_get_listeners", 00:04:56.619 "nvmf_subsystem_get_qpairs", 00:04:56.619 "nvmf_subsystem_get_controllers", 00:04:56.619 "nvmf_get_stats", 00:04:56.619 "nvmf_get_transports", 00:04:56.619 "nvmf_create_transport", 00:04:56.619 "nvmf_get_targets", 00:04:56.619 "nvmf_delete_target", 00:04:56.619 "nvmf_create_target", 00:04:56.619 "nvmf_subsystem_allow_any_host", 00:04:56.619 "nvmf_subsystem_remove_host", 00:04:56.619 "nvmf_subsystem_add_host", 00:04:56.619 "nvmf_ns_remove_host", 00:04:56.619 "nvmf_ns_add_host", 00:04:56.619 "nvmf_subsystem_remove_ns", 00:04:56.619 "nvmf_subsystem_add_ns", 00:04:56.619 "nvmf_subsystem_listener_set_ana_state", 00:04:56.619 "nvmf_discovery_get_referrals", 00:04:56.619 "nvmf_discovery_remove_referral", 00:04:56.619 "nvmf_discovery_add_referral", 00:04:56.619 "nvmf_subsystem_remove_listener", 00:04:56.619 "nvmf_subsystem_add_listener", 00:04:56.619 "nvmf_delete_subsystem", 00:04:56.619 "nvmf_create_subsystem", 00:04:56.619 "nvmf_get_subsystems", 00:04:56.619 "nvmf_set_crdt", 00:04:56.619 "nvmf_set_config", 00:04:56.619 "nvmf_set_max_subsystems", 00:04:56.619 "iscsi_get_histogram", 00:04:56.619 "iscsi_enable_histogram", 00:04:56.619 "iscsi_set_options", 00:04:56.619 "iscsi_get_auth_groups", 00:04:56.619 "iscsi_auth_group_remove_secret", 00:04:56.619 "iscsi_auth_group_add_secret", 00:04:56.619 "iscsi_delete_auth_group", 00:04:56.619 "iscsi_create_auth_group", 00:04:56.619 "iscsi_set_discovery_auth", 00:04:56.619 "iscsi_get_options", 00:04:56.619 "iscsi_target_node_request_logout", 00:04:56.619 "iscsi_target_node_set_redirect", 00:04:56.619 "iscsi_target_node_set_auth", 00:04:56.619 "iscsi_target_node_add_lun", 00:04:56.619 "iscsi_get_stats", 00:04:56.619 "iscsi_get_connections", 00:04:56.619 "iscsi_portal_group_set_auth", 00:04:56.619 "iscsi_start_portal_group", 00:04:56.619 "iscsi_delete_portal_group", 00:04:56.619 "iscsi_create_portal_group", 00:04:56.619 "iscsi_get_portal_groups", 00:04:56.619 "iscsi_delete_target_node", 00:04:56.619 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.619 "iscsi_target_node_add_pg_ig_maps", 00:04:56.619 "iscsi_create_target_node", 00:04:56.619 "iscsi_get_target_nodes", 00:04:56.619 "iscsi_delete_initiator_group", 00:04:56.619 "iscsi_initiator_group_remove_initiators", 00:04:56.619 "iscsi_initiator_group_add_initiators", 00:04:56.619 "iscsi_create_initiator_group", 00:04:56.619 "iscsi_get_initiator_groups", 00:04:56.619 "keyring_linux_set_options", 00:04:56.619 "keyring_file_remove_key", 00:04:56.619 "keyring_file_add_key", 00:04:56.619 "vfu_virtio_create_scsi_endpoint", 00:04:56.619 
"vfu_virtio_scsi_remove_target", 00:04:56.619 "vfu_virtio_scsi_add_target", 00:04:56.619 "vfu_virtio_create_blk_endpoint", 00:04:56.619 "vfu_virtio_delete_endpoint", 00:04:56.619 "iaa_scan_accel_module", 00:04:56.619 "dsa_scan_accel_module", 00:04:56.619 "ioat_scan_accel_module", 00:04:56.619 "accel_error_inject_error", 00:04:56.619 "bdev_iscsi_delete", 00:04:56.619 "bdev_iscsi_create", 00:04:56.619 "bdev_iscsi_set_options", 00:04:56.619 "bdev_virtio_attach_controller", 00:04:56.619 "bdev_virtio_scsi_get_devices", 00:04:56.619 "bdev_virtio_detach_controller", 00:04:56.619 "bdev_virtio_blk_set_hotplug", 00:04:56.619 "bdev_ftl_set_property", 00:04:56.619 "bdev_ftl_get_properties", 00:04:56.619 "bdev_ftl_get_stats", 00:04:56.619 "bdev_ftl_unmap", 00:04:56.619 "bdev_ftl_unload", 00:04:56.619 "bdev_ftl_delete", 00:04:56.619 "bdev_ftl_load", 00:04:56.619 "bdev_ftl_create", 00:04:56.619 "bdev_aio_delete", 00:04:56.619 "bdev_aio_rescan", 00:04:56.619 "bdev_aio_create", 00:04:56.619 "blobfs_create", 00:04:56.619 "blobfs_detect", 00:04:56.619 "blobfs_set_cache_size", 00:04:56.619 "bdev_zone_block_delete", 00:04:56.619 "bdev_zone_block_create", 00:04:56.619 "bdev_delay_delete", 00:04:56.619 "bdev_delay_create", 00:04:56.619 "bdev_delay_update_latency", 00:04:56.619 "bdev_split_delete", 00:04:56.619 "bdev_split_create", 00:04:56.619 "bdev_error_inject_error", 00:04:56.619 "bdev_error_delete", 00:04:56.619 "bdev_error_create", 00:04:56.619 "bdev_raid_set_options", 00:04:56.619 "bdev_raid_remove_base_bdev", 00:04:56.619 "bdev_raid_add_base_bdev", 00:04:56.619 "bdev_raid_delete", 00:04:56.619 "bdev_raid_create", 00:04:56.619 "bdev_raid_get_bdevs", 00:04:56.619 "bdev_lvol_set_parent_bdev", 00:04:56.619 "bdev_lvol_set_parent", 00:04:56.619 "bdev_lvol_check_shallow_copy", 00:04:56.619 "bdev_lvol_start_shallow_copy", 00:04:56.619 "bdev_lvol_grow_lvstore", 00:04:56.619 "bdev_lvol_get_lvols", 00:04:56.619 "bdev_lvol_get_lvstores", 00:04:56.619 "bdev_lvol_delete", 00:04:56.619 "bdev_lvol_set_read_only", 00:04:56.619 "bdev_lvol_resize", 00:04:56.619 "bdev_lvol_decouple_parent", 00:04:56.619 "bdev_lvol_inflate", 00:04:56.619 "bdev_lvol_rename", 00:04:56.619 "bdev_lvol_clone_bdev", 00:04:56.619 "bdev_lvol_clone", 00:04:56.619 "bdev_lvol_snapshot", 00:04:56.619 "bdev_lvol_create", 00:04:56.619 "bdev_lvol_delete_lvstore", 00:04:56.619 "bdev_lvol_rename_lvstore", 00:04:56.619 "bdev_lvol_create_lvstore", 00:04:56.619 "bdev_passthru_delete", 00:04:56.619 "bdev_passthru_create", 00:04:56.619 "bdev_nvme_cuse_unregister", 00:04:56.619 "bdev_nvme_cuse_register", 00:04:56.619 "bdev_opal_new_user", 00:04:56.619 "bdev_opal_set_lock_state", 00:04:56.619 "bdev_opal_delete", 00:04:56.619 "bdev_opal_get_info", 00:04:56.619 "bdev_opal_create", 00:04:56.619 "bdev_nvme_opal_revert", 00:04:56.619 "bdev_nvme_opal_init", 00:04:56.619 "bdev_nvme_send_cmd", 00:04:56.619 "bdev_nvme_get_path_iostat", 00:04:56.619 "bdev_nvme_get_mdns_discovery_info", 00:04:56.619 "bdev_nvme_stop_mdns_discovery", 00:04:56.619 "bdev_nvme_start_mdns_discovery", 00:04:56.619 "bdev_nvme_set_multipath_policy", 00:04:56.619 "bdev_nvme_set_preferred_path", 00:04:56.619 "bdev_nvme_get_io_paths", 00:04:56.619 "bdev_nvme_remove_error_injection", 00:04:56.619 "bdev_nvme_add_error_injection", 00:04:56.619 "bdev_nvme_get_discovery_info", 00:04:56.619 "bdev_nvme_stop_discovery", 00:04:56.619 "bdev_nvme_start_discovery", 00:04:56.619 "bdev_nvme_get_controller_health_info", 00:04:56.619 "bdev_nvme_disable_controller", 00:04:56.619 "bdev_nvme_enable_controller", 00:04:56.619 
"bdev_nvme_reset_controller", 00:04:56.619 "bdev_nvme_get_transport_statistics", 00:04:56.619 "bdev_nvme_apply_firmware", 00:04:56.619 "bdev_nvme_detach_controller", 00:04:56.619 "bdev_nvme_get_controllers", 00:04:56.619 "bdev_nvme_attach_controller", 00:04:56.619 "bdev_nvme_set_hotplug", 00:04:56.619 "bdev_nvme_set_options", 00:04:56.619 "bdev_null_resize", 00:04:56.619 "bdev_null_delete", 00:04:56.619 "bdev_null_create", 00:04:56.619 "bdev_malloc_delete", 00:04:56.619 "bdev_malloc_create" 00:04:56.619 ] 00:04:56.619 13:50:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.619 13:50:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.619 13:50:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2827851 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2827851 ']' 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2827851 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2827851 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2827851' 00:04:56.619 killing process with pid 2827851 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2827851 00:04:56.619 13:50:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2827851 00:04:56.886 00:04:56.886 real 0m1.595s 00:04:56.886 user 0m2.867s 00:04:56.886 sys 0m0.533s 00:04:56.886 13:50:34 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.886 13:50:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.886 ************************************ 00:04:56.886 END TEST spdkcli_tcp 00:04:56.886 ************************************ 00:04:57.145 13:50:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.145 13:50:34 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.145 13:50:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.145 13:50:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.146 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.146 ************************************ 00:04:57.146 START TEST dpdk_mem_utility 00:04:57.146 ************************************ 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.146 * Looking for test storage... 
00:04:57.146 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:57.146 13:50:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:57.146 13:50:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2828166 00:04:57.146 13:50:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2828166 00:04:57.146 13:50:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2828166 ']' 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.146 13:50:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.146 [2024-07-15 13:50:35.168271] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:57.146 [2024-07-15 13:50:35.168369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828166 ] 00:04:57.146 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.404 [2024-07-15 13:50:35.254649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.404 [2024-07-15 13:50:35.334458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.971 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.971 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:57.971 13:50:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.971 13:50:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.971 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.971 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.971 { 00:04:57.971 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.971 } 00:04:57.971 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.971 13:50:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:58.231 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:58.231 1 heaps totaling size 814.000000 MiB 00:04:58.231 size: 814.000000 MiB heap id: 0 00:04:58.231 end heaps---------- 00:04:58.231 8 mempools totaling size 598.116089 MiB 00:04:58.231 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.231 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.231 size: 84.521057 MiB name: bdev_io_2828166 00:04:58.231 size: 51.011292 MiB name: evtpool_2828166 
00:04:58.231 size: 50.003479 MiB name: msgpool_2828166 00:04:58.231 size: 21.763794 MiB name: PDU_Pool 00:04:58.231 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.231 size: 0.026123 MiB name: Session_Pool 00:04:58.231 end mempools------- 00:04:58.231 6 memzones totaling size 4.142822 MiB 00:04:58.231 size: 1.000366 MiB name: RG_ring_0_2828166 00:04:58.231 size: 1.000366 MiB name: RG_ring_1_2828166 00:04:58.231 size: 1.000366 MiB name: RG_ring_4_2828166 00:04:58.231 size: 1.000366 MiB name: RG_ring_5_2828166 00:04:58.231 size: 0.125366 MiB name: RG_ring_2_2828166 00:04:58.231 size: 0.015991 MiB name: RG_ring_3_2828166 00:04:58.231 end memzones------- 00:04:58.231 13:50:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.231 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:58.231 list of free elements. size: 12.519348 MiB 00:04:58.231 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:58.231 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:58.231 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:58.231 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:58.231 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:58.231 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:58.231 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:58.231 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:58.231 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:58.231 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:58.231 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:58.231 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:58.231 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:58.231 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:58.231 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:58.231 list of standard malloc elements. 
size: 199.218079 MiB 00:04:58.231 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:58.231 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:58.231 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:58.231 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:58.231 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:58.231 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:58.231 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:58.231 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:58.231 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:58.231 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:58.231 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:58.231 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:58.231 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:58.231 list of memzone associated elements. 
size: 602.262573 MiB 00:04:58.231 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:58.232 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.232 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:58.232 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.232 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:58.232 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2828166_0 00:04:58.232 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:58.232 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2828166_0 00:04:58.232 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:58.232 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2828166_0 00:04:58.232 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:58.232 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.232 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:58.232 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:58.232 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:58.232 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2828166 00:04:58.232 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:58.232 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2828166 00:04:58.232 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:58.232 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2828166 00:04:58.232 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:58.232 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.232 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:58.232 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.232 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:58.232 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.232 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:58.232 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.232 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:58.232 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2828166 00:04:58.232 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:58.232 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2828166 00:04:58.232 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:58.232 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2828166 00:04:58.232 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:58.232 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2828166 00:04:58.232 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:58.232 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2828166 00:04:58.232 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:58.232 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.232 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:58.232 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.232 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:58.232 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.232 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:58.232 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2828166 00:04:58.232 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:58.232 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.232 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:58.232 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.232 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:58.232 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2828166 00:04:58.232 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:58.232 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.232 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:58.232 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2828166 00:04:58.232 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:58.232 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2828166 00:04:58.232 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:58.232 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:58.232 13:50:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.232 13:50:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2828166 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2828166 ']' 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2828166 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2828166 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2828166' 00:04:58.232 killing process with pid 2828166 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2828166 00:04:58.232 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2828166 00:04:58.491 00:04:58.491 real 0m1.471s 00:04:58.491 user 0m1.499s 00:04:58.491 sys 0m0.459s 00:04:58.491 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.491 13:50:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.491 ************************************ 00:04:58.491 END TEST dpdk_mem_utility 00:04:58.491 ************************************ 00:04:58.491 13:50:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.491 13:50:36 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:58.491 13:50:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.491 13:50:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.491 13:50:36 -- common/autotest_common.sh@10 -- # set +x 00:04:58.768 ************************************ 00:04:58.768 START TEST event 00:04:58.768 ************************************ 00:04:58.768 13:50:36 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:58.768 * Looking for test storage... 
00:04:58.768 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:58.768 13:50:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:58.768 13:50:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.768 13:50:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.768 13:50:36 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:58.768 13:50:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.768 13:50:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.768 ************************************ 00:04:58.768 START TEST event_perf 00:04:58.768 ************************************ 00:04:58.768 13:50:36 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.768 Running I/O for 1 seconds...[2024-07-15 13:50:36.752493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:58.768 [2024-07-15 13:50:36.752578] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828412 ] 00:04:58.768 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.028 [2024-07-15 13:50:36.841332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.028 [2024-07-15 13:50:36.926161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.028 [2024-07-15 13:50:36.926302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.028 [2024-07-15 13:50:36.926305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.028 [2024-07-15 13:50:36.926271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.963 Running I/O for 1 seconds... 00:04:59.963 lcore 0: 190803 00:04:59.963 lcore 1: 190801 00:04:59.963 lcore 2: 190800 00:04:59.963 lcore 3: 190801 00:04:59.963 done. 00:04:59.963 00:04:59.963 real 0m1.266s 00:04:59.963 user 0m4.149s 00:04:59.963 sys 0m0.111s 00:04:59.963 13:50:37 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.963 13:50:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.963 ************************************ 00:04:59.963 END TEST event_perf 00:04:59.963 ************************************ 00:05:00.222 13:50:38 event -- common/autotest_common.sh@1142 -- # return 0 00:05:00.222 13:50:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.222 13:50:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:00.222 13:50:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.222 13:50:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.222 ************************************ 00:05:00.222 START TEST event_reactor 00:05:00.222 ************************************ 00:05:00.222 13:50:38 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.222 [2024-07-15 13:50:38.098278] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:00.222 [2024-07-15 13:50:38.098377] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828619 ] 00:05:00.222 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.222 [2024-07-15 13:50:38.184014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.222 [2024-07-15 13:50:38.266067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.597 test_start 00:05:01.597 oneshot 00:05:01.597 tick 100 00:05:01.597 tick 100 00:05:01.597 tick 250 00:05:01.597 tick 100 00:05:01.597 tick 100 00:05:01.597 tick 100 00:05:01.597 tick 250 00:05:01.597 tick 500 00:05:01.597 tick 100 00:05:01.597 tick 100 00:05:01.597 tick 250 00:05:01.597 tick 100 00:05:01.597 tick 100 00:05:01.597 test_end 00:05:01.597 00:05:01.597 real 0m1.257s 00:05:01.597 user 0m1.143s 00:05:01.597 sys 0m0.110s 00:05:01.597 13:50:39 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.597 13:50:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.597 ************************************ 00:05:01.597 END TEST event_reactor 00:05:01.597 ************************************ 00:05:01.597 13:50:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:01.597 13:50:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.597 13:50:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:01.597 13:50:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.598 13:50:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.598 ************************************ 00:05:01.598 START TEST event_reactor_perf 00:05:01.598 ************************************ 00:05:01.598 13:50:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.598 [2024-07-15 13:50:39.438616] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:01.598 [2024-07-15 13:50:39.438714] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828819 ] 00:05:01.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.598 [2024-07-15 13:50:39.524352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.598 [2024-07-15 13:50:39.605572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.973 test_start 00:05:02.973 test_end 00:05:02.973 Performance: 934982 events per second 00:05:02.973 00:05:02.973 real 0m1.257s 00:05:02.973 user 0m1.141s 00:05:02.973 sys 0m0.112s 00:05:02.973 13:50:40 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.973 13:50:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.973 ************************************ 00:05:02.973 END TEST event_reactor_perf 00:05:02.973 ************************************ 00:05:02.973 13:50:40 event -- common/autotest_common.sh@1142 -- # return 0 00:05:02.973 13:50:40 event -- event/event.sh@49 -- # uname -s 00:05:02.973 13:50:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.973 13:50:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.973 13:50:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.973 13:50:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.973 13:50:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.973 ************************************ 00:05:02.973 START TEST event_scheduler 00:05:02.973 ************************************ 00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.973 * Looking for test storage... 00:05:02.973 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:02.973 13:50:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.973 13:50:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2829043 00:05:02.973 13:50:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.973 13:50:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.973 13:50:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2829043 00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2829043 ']' 00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
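The waitforlisten trace that follows the scheduler launch shows the RPC socket handshake: the test starts the scheduler app with --wait-for-rpc, then polls until the app listens on /var/tmp/spdk.sock. Only the locals (rpc_addr, max_retries=100), the banner echo, and the final (( i == 0 )) check are visible in the trace; the polling body below is an assumption:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # trace: local rpc_addr=/var/tmp/spdk.sock
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" 2>/dev/null || break   # assumed: give up if the app died
        [ -S "$rpc_addr" ] && break           # assumed: stop once the socket appears
        sleep 0.1
    done
    (( i == 0 )) && return 1                  # trace: (( i == 0 )) ... return 0
    return 0
}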
00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.973 13:50:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.973 [2024-07-15 13:50:40.896928] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:02.973 [2024-07-15 13:50:40.897005] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829043 ] 00:05:02.973 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.973 [2024-07-15 13:50:40.984423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.232 [2024-07-15 13:50:41.069483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.232 [2024-07-15 13:50:41.069584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.232 [2024-07-15 13:50:41.069695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.232 [2024-07-15 13:50:41.069696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:03.798 13:50:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.798 [2024-07-15 13:50:41.748129] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:03.798 [2024-07-15 13:50:41.748153] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.798 [2024-07-15 13:50:41.748165] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.798 [2024-07-15 13:50:41.748172] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.798 [2024-07-15 13:50:41.748183] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.798 13:50:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.798 [2024-07-15 13:50:41.823209] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
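At scheduler.sh@39/@40 the test issues exactly two framework RPCs before any threads exist: switch to the dynamic scheduler, then finish subsystem init. A hypothetical manual session against the same --wait-for-rpc app would look like this (rpc.py path taken from the workspace above, default socket assumed):

/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init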
00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.798 13:50:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.798 13:50:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.798 ************************************ 00:05:03.798 START TEST scheduler_create_thread 00:05:03.798 ************************************ 00:05:03.798 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:03.798 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.798 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.798 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 2 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 3 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 4 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 5 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 6 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 7 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 8 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 9 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 10 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.058 13:50:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.626 13:50:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.626 13:50:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:04.626 13:50:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.626 13:50:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.005 13:50:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.005 13:50:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.005 13:50:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.005 13:50:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.005 13:50:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.941 13:50:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.941 00:05:06.941 real 0m3.100s 00:05:06.941 user 0m0.025s 00:05:06.941 sys 0m0.007s 00:05:06.941 13:50:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.941 13:50:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.941 ************************************ 00:05:06.941 END TEST scheduler_create_thread 00:05:06.941 ************************************ 00:05:06.941 13:50:45 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:06.941 13:50:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:06.941 13:50:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2829043 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2829043 ']' 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2829043 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2829043 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2829043' 00:05:07.200 killing process with pid 2829043 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2829043 00:05:07.200 13:50:45 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2829043 00:05:07.458 [2024-07-15 13:50:45.346525] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
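Unwinding the per-call xtrace, the whole scheduler_create_thread test reduces to the RPC sequence below (condensed sketch; the thread ids in the comments are the values echoed in the trace between calls):

rpc="rpc_cmd --plugin scheduler_plugin"          # trace: rpc=rpc_cmd, plugin scheduler_plugin
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m $mask -a 100   # busy threads, ids 2-5
done
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n idle_pinned -m $mask -a 0       # idle threads, ids 6-9
done
$rpc scheduler_thread_create -n one_third_active -a 30              # id 10
thread_id=$($rpc scheduler_thread_create -n half_active -a 0)       # id 11
$rpc scheduler_thread_set_active "$thread_id" 50                    # make thread 11 half active
thread_id=$($rpc scheduler_thread_create -n deleted -a 100)         # id 12
$rpc scheduler_thread_delete "$thread_id"                           # delete thread 12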
00:05:07.718 00:05:07.718 real 0m4.809s 00:05:07.718 user 0m9.328s 00:05:07.718 sys 0m0.468s 00:05:07.718 13:50:45 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.718 13:50:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.718 ************************************ 00:05:07.718 END TEST event_scheduler 00:05:07.718 ************************************ 00:05:07.718 13:50:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:07.718 13:50:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:07.718 13:50:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:07.718 13:50:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.718 13:50:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.718 13:50:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.718 ************************************ 00:05:07.718 START TEST app_repeat 00:05:07.718 ************************************ 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2829659 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2829659' 00:05:07.718 Process app_repeat pid: 2829659 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:07.718 spdk_app_start Round 0 00:05:07.718 13:50:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2829659 /var/tmp/spdk-nbd.sock 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2829659 ']' 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.718 13:50:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.718 [2024-07-15 13:50:45.696422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:07.718 [2024-07-15 13:50:45.696491] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829659 ] 00:05:07.718 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.718 [2024-07-15 13:50:45.784438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.977 [2024-07-15 13:50:45.874416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.977 [2024-07-15 13:50:45.874418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.544 13:50:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.544 13:50:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:08.544 13:50:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.803 Malloc0 00:05:08.803 13:50:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.061 Malloc1 00:05:09.061 13:50:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.061 13:50:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.061 /dev/nbd0 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.320 13:50:47 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.320 1+0 records in 00:05:09.320 1+0 records out 00:05:09.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252899 s, 16.2 MB/s 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.320 /dev/nbd1 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.320 13:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.320 1+0 records in 00:05:09.320 1+0 records out 00:05:09.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027084 s, 15.1 MB/s 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:09.320 13:50:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.579 13:50:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.579 13:50:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.579 
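The grep/dd/stat trace above is the waitfornbd helper from common/autotest_common.sh: wait for the device node to show up in /proc/partitions, then read one direct-I/O block until it reports a non-zero size. A sketch reconstructed from the trace ($testdir and the inter-poll delay are assumptions):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do                        # trace: (( i = 1 )) ... (( i <= 20 ))
        grep -q -w "$nbd_name" /proc/partitions && break   # kernel has registered the device
        sleep 0.1                                          # delay assumed, not in the trace
    done
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$testdir/nbdtest")              # trace: stat -c %s ... size=4096
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ] && return 0                       # trace: '[' 4096 '!=' 0 ']'
    done
    return 1
}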
13:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.579 { 00:05:09.579 "nbd_device": "/dev/nbd0", 00:05:09.579 "bdev_name": "Malloc0" 00:05:09.579 }, 00:05:09.579 { 00:05:09.579 "nbd_device": "/dev/nbd1", 00:05:09.579 "bdev_name": "Malloc1" 00:05:09.579 } 00:05:09.579 ]' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.579 { 00:05:09.579 "nbd_device": "/dev/nbd0", 00:05:09.579 "bdev_name": "Malloc0" 00:05:09.579 }, 00:05:09.579 { 00:05:09.579 "nbd_device": "/dev/nbd1", 00:05:09.579 "bdev_name": "Malloc1" 00:05:09.579 } 00:05:09.579 ]' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.579 /dev/nbd1' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.579 /dev/nbd1' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.579 13:50:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.838 256+0 records in 00:05:09.838 256+0 records out 00:05:09.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105018 s, 99.8 MB/s 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.838 256+0 records in 00:05:09.838 256+0 records out 00:05:09.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208359 s, 50.3 MB/s 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.838 256+0 records in 00:05:09.838 256+0 records out 
00:05:09.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223063 s, 47.0 MB/s 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.838 13:50:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.097 13:50:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.097 13:50:48 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.097 13:50:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.356 13:50:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.356 13:50:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.615 13:50:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.874 [2024-07-15 13:50:48.749896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.874 [2024-07-15 13:50:48.830347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.874 [2024-07-15 13:50:48.830347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.874 [2024-07-15 13:50:48.878263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.874 [2024-07-15 13:50:48.878306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.217 13:50:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.217 13:50:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:14.217 spdk_app_start Round 1 00:05:14.217 13:50:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2829659 /var/tmp/spdk-nbd.sock 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2829659 ']' 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
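The dd/cmp pass traced in Round 0 above is nbd_dd_data_verify from bdev/nbd_common.sh: write mode seeds 1 MiB of random data and copies it onto every nbd with direct I/O; verify mode byte-compares each device against the seed file. A sketch reconstructed from the trace ($testdir assumed):

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=$testdir/nbdrandtest
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256        # 1 MiB seed
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"                          # non-zero exit on mismatch
        done
        rm "$tmp_file"
    fi
}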
00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.217 13:50:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:14.217 13:50:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.217 Malloc0 00:05:14.217 13:50:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.217 Malloc1 00:05:14.217 13:50:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.217 13:50:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.474 /dev/nbd0 00:05:14.474 13:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.474 13:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.474 13:50:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:14.474 13:50:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.474 13:50:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.475 1+0 records in 00:05:14.475 1+0 records out 00:05:14.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249655 s, 16.4 MB/s 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.475 13:50:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.475 13:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.475 13:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.475 13:50:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.475 /dev/nbd1 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.733 1+0 records in 00:05:14.733 1+0 records out 00:05:14.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244925 s, 16.7 MB/s 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.733 13:50:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.733 { 00:05:14.733 "nbd_device": "/dev/nbd0", 00:05:14.733 "bdev_name": "Malloc0" 00:05:14.733 }, 00:05:14.733 { 00:05:14.733 "nbd_device": "/dev/nbd1", 00:05:14.733 "bdev_name": "Malloc1" 00:05:14.733 } 00:05:14.733 ]' 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.733 { 00:05:14.733 "nbd_device": "/dev/nbd0", 00:05:14.733 "bdev_name": "Malloc0" 00:05:14.733 }, 00:05:14.733 { 00:05:14.733 "nbd_device": "/dev/nbd1", 00:05:14.733 "bdev_name": "Malloc1" 00:05:14.733 } 00:05:14.733 ]' 00:05:14.733 13:50:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.992 /dev/nbd1' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.992 /dev/nbd1' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.992 256+0 records in 00:05:14.992 256+0 records out 00:05:14.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106406 s, 98.5 MB/s 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.992 256+0 records in 00:05:14.992 256+0 records out 00:05:14.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209247 s, 50.1 MB/s 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.992 256+0 records in 00:05:14.992 256+0 records out 00:05:14.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222002 s, 47.2 MB/s 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.992 13:50:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.250 13:50:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.508 13:50:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.508 13:50:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.767 13:50:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.025 [2024-07-15 13:50:53.946895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.025 [2024-07-15 13:50:54.029628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.025 [2024-07-15 13:50:54.029633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.025 [2024-07-15 13:50:54.072903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.025 [2024-07-15 13:50:54.072948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.310 13:50:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.310 13:50:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:19.310 spdk_app_start Round 2 00:05:19.310 13:50:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2829659 /var/tmp/spdk-nbd.sock 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2829659 ']' 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
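For readers following the xtrace output, the write/verify cycle traced above (nbd_common.sh@70-85) reduces to roughly the sketch below. The temp-file path is shortened from the workspace path in the log; everything else mirrors the traced commands.

    tmp_file=/tmp/nbdrandtest            # log uses .../spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # write phase: seed a 1 MiB random pattern, then copy it onto each device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: byte-compare the first 1M of every device against the pattern
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"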
00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.310 13:50:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:19.310 13:50:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.310 Malloc0 00:05:19.310 13:50:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.310 Malloc1 00:05:19.310 13:50:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.310 13:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.569 /dev/nbd0 00:05:19.569 13:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.569 13:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.569 1+0 records in 00:05:19.569 1+0 records out 00:05:19.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232934 s, 17.6 MB/s 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.569 13:50:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.570 13:50:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.570 13:50:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.570 13:50:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.570 13:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.570 13:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.570 13:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.829 /dev/nbd1 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.829 1+0 records in 00:05:19.829 1+0 records out 00:05:19.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248577 s, 16.5 MB/s 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.829 13:50:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.829 13:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:20.088 13:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.088 { 00:05:20.088 "nbd_device": "/dev/nbd0", 00:05:20.088 "bdev_name": "Malloc0" 00:05:20.088 }, 00:05:20.088 { 00:05:20.088 "nbd_device": "/dev/nbd1", 00:05:20.088 "bdev_name": "Malloc1" 00:05:20.088 } 00:05:20.088 ]' 00:05:20.088 13:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.088 13:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.088 { 00:05:20.088 "nbd_device": "/dev/nbd0", 00:05:20.088 "bdev_name": "Malloc0" 00:05:20.088 }, 00:05:20.088 { 00:05:20.088 "nbd_device": "/dev/nbd1", 00:05:20.088 "bdev_name": "Malloc1" 00:05:20.088 } 00:05:20.088 ]' 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.088 /dev/nbd1' 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.088 /dev/nbd1' 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.088 256+0 records in 00:05:20.088 256+0 records out 00:05:20.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115116 s, 91.1 MB/s 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.088 256+0 records in 00:05:20.088 256+0 records out 00:05:20.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208132 s, 50.4 MB/s 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.088 256+0 records in 00:05:20.088 256+0 records out 00:05:20.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227299 s, 46.1 MB/s 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:20.088 13:50:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.089 13:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.347 13:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.348 13:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.348 13:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.348 13:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.606 13:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.864 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.865 13:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.865 13:50:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.865 13:50:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.865 13:50:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.865 13:50:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.865 13:50:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.124 13:50:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.124 [2024-07-15 13:50:59.154698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.383 [2024-07-15 13:50:59.234592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.383 [2024-07-15 13:50:59.234594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.383 [2024-07-15 13:50:59.282513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.383 [2024-07-15 13:50:59.282569] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.916 13:51:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2829659 /var/tmp/spdk-nbd.sock 00:05:23.916 13:51:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2829659 ']' 00:05:23.916 13:51:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.916 13:51:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.916 13:51:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
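The nbd_get_count check traced above counts the exported devices by piping the RPC output through jq and grep. Note that grep -c exits non-zero when it counts zero matches, which is why the trace shows a bare "true" right before count=0. A condensed sketch (rpc.py path shortened from the workspace path):

    rpc=scripts/rpc.py                                  # full workspace path in the log
    json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')   # one /dev/nbdX per line
    count=$(echo "$names" | grep -c /dev/nbd || true)   # 0 here, after nbd_stop_disk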
00:05:23.916 13:51:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.916 13:51:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:24.175 13:51:02 event.app_repeat -- event/event.sh@39 -- # killprocess 2829659 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2829659 ']' 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2829659 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2829659 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2829659' 00:05:24.175 killing process with pid 2829659 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2829659 00:05:24.175 13:51:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2829659 00:05:24.434 spdk_app_start is called in Round 0. 00:05:24.434 Shutdown signal received, stop current app iteration 00:05:24.435 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:24.435 spdk_app_start is called in Round 1. 00:05:24.435 Shutdown signal received, stop current app iteration 00:05:24.435 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:24.435 spdk_app_start is called in Round 2. 00:05:24.435 Shutdown signal received, stop current app iteration 00:05:24.435 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:24.435 spdk_app_start is called in Round 3. 
00:05:24.435 Shutdown signal received, stop current app iteration 00:05:24.435 13:51:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:24.435 13:51:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:24.435 00:05:24.435 real 0m16.713s 00:05:24.435 user 0m35.468s 00:05:24.435 sys 0m3.336s 00:05:24.435 13:51:02 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.435 13:51:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.435 ************************************ 00:05:24.435 END TEST app_repeat 00:05:24.435 ************************************ 00:05:24.435 13:51:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.435 13:51:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:24.435 13:51:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:24.435 13:51:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.435 13:51:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.435 13:51:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.435 ************************************ 00:05:24.435 START TEST cpu_locks 00:05:24.435 ************************************ 00:05:24.435 13:51:02 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:24.694 * Looking for test storage... 00:05:24.694 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:24.694 13:51:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:24.694 13:51:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:24.694 13:51:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:24.694 13:51:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:24.694 13:51:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.694 13:51:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.694 13:51:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.694 ************************************ 00:05:24.694 START TEST default_locks 00:05:24.694 ************************************ 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2832358 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2832358 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2832358 ']' 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
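The app_repeat teardown above goes through autotest_common.sh's killprocess helper. A plausible reconstruction from the @948-@972 trace lines (the helper also special-cases processes launched via sudo; that branch is checked but not taken in this run, so it is elided here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                 # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            ps --no-headers -o comm= "$pid"        # reports reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap so the test exits cleanly
    }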
00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.694 13:51:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.694 [2024-07-15 13:51:02.645145] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:24.694 [2024-07-15 13:51:02.645231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832358 ] 00:05:24.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.694 [2024-07-15 13:51:02.731240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.953 [2024-07-15 13:51:02.821815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.544 13:51:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.544 13:51:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:25.544 13:51:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2832358 00:05:25.544 13:51:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2832358 00:05:25.544 13:51:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.111 lslocks: write error 00:05:26.111 13:51:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2832358 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2832358 ']' 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2832358 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2832358 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2832358' 00:05:26.112 killing process with pid 2832358 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2832358 00:05:26.112 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2832358 00:05:26.680 13:51:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2832358 00:05:26.680 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:26.680 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2832358 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2832358 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2832358 ']' 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.681 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2832358) - No such process 00:05:26.681 ERROR: process (pid: 2832358) is no longer running 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.681 00:05:26.681 real 0m1.843s 00:05:26.681 user 0m1.891s 00:05:26.681 sys 0m0.668s 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.681 13:51:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.681 ************************************ 00:05:26.681 END TEST default_locks 00:05:26.681 ************************************ 00:05:26.681 13:51:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:26.681 13:51:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:26.681 13:51:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.681 13:51:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.681 13:51:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.681 ************************************ 00:05:26.681 START TEST default_locks_via_rpc 00:05:26.681 ************************************ 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2832716 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2832716 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2832716 ']' 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.681 13:51:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.681 [2024-07-15 13:51:04.566707] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:26.681 [2024-07-15 13:51:04.566771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832716 ] 00:05:26.681 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.681 [2024-07-15 13:51:04.649516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.681 [2024-07-15 13:51:04.739946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2832716 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2832716 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
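default_locks_via_rpc starts another single-core target and asserts the lock with locks_exist, whose two traced commands are sketched below. The stray "lslocks: write error" lines elsewhere in this log are harmless: grep -q closes the pipe after the first match, so lslocks hits a broken pipe on its next write.

    locks_exist() {    # passes iff the pid holds an spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }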
00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2832716 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2832716 ']' 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2832716 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.616 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2832716 00:05:27.875 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.875 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.875 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2832716' 00:05:27.875 killing process with pid 2832716 00:05:27.875 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2832716 00:05:27.875 13:51:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2832716 00:05:28.134 00:05:28.134 real 0m1.498s 00:05:28.134 user 0m1.536s 00:05:28.134 sys 0m0.555s 00:05:28.134 13:51:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.134 13:51:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.134 ************************************ 00:05:28.134 END TEST default_locks_via_rpc 00:05:28.134 ************************************ 00:05:28.134 13:51:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.134 13:51:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:28.134 13:51:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.134 13:51:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.134 13:51:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.134 ************************************ 00:05:28.134 START TEST non_locking_app_on_locked_coremask 00:05:28.134 ************************************ 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2833196 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2833196 /var/tmp/spdk.sock 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2833196 ']' 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.134 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.135 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.135 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.135 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.135 [2024-07-15 13:51:06.135933] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:28.135 [2024-07-15 13:51:06.135989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833196 ] 00:05:28.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.393 [2024-07-15 13:51:06.220334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.393 [2024-07-15 13:51:06.306901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2833387 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2833387 /var/tmp/spdk2.sock 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2833387 ']' 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.960 13:51:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.960 [2024-07-15 13:51:06.998050] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:28.960 [2024-07-15 13:51:06.998120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833387 ] 00:05:29.218 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.218 [2024-07-15 13:51:07.088438] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
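In non_locking_app_on_locked_coremask the second target is started on the same core mask but with --disable-cpumask-locks, which is why it prints the "CPU core locks deactivated" notice above instead of failing to claim core 0. A minimal reproduction sketch (binary path shortened from the workspace path in the log):

    spdk_tgt=build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                         # claims core 0's lock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # without the flag, the second instance would abort with
    # "Cannot create lock on core 0" (seen later in locking_app_on_locked_coremask)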
00:05:29.218 [2024-07-15 13:51:07.088462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.218 [2024-07-15 13:51:07.247439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.784 13:51:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.784 13:51:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:29.784 13:51:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2833196 00:05:29.784 13:51:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2833196 00:05:29.784 13:51:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.719 lslocks: write error 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2833196 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2833196 ']' 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2833196 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833196 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833196' 00:05:30.719 killing process with pid 2833196 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2833196 00:05:30.719 13:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2833196 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2833387 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2833387 ']' 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2833387 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833387 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833387' 00:05:31.286 
killing process with pid 2833387 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2833387 00:05:31.286 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2833387 00:05:31.853 00:05:31.853 real 0m3.558s 00:05:31.853 user 0m3.745s 00:05:31.853 sys 0m1.129s 00:05:31.853 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.853 13:51:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.853 ************************************ 00:05:31.853 END TEST non_locking_app_on_locked_coremask 00:05:31.853 ************************************ 00:05:31.853 13:51:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:31.853 13:51:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:31.853 13:51:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.853 13:51:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.853 13:51:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.853 ************************************ 00:05:31.853 START TEST locking_app_on_unlocked_coremask 00:05:31.853 ************************************ 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2833787 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2833787 /var/tmp/spdk.sock 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2833787 ']' 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.853 13:51:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.853 [2024-07-15 13:51:09.775564] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
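Running two targets side by side works because each instance gets its own RPC socket via -r and rpc.py selects the instance with -s; the lock state can also be toggled at runtime through the framework_*_cpumask_locks RPCs traced earlier. For example:

    rpc=scripts/rpc.py                        # full workspace path in the log
    "$rpc" framework_disable_cpumask_locks    # first instance, /var/tmp/spdk.sock
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks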
00:05:31.853 [2024-07-15 13:51:09.775620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833787 ] 00:05:31.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.853 [2024-07-15 13:51:09.860930] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:31.853 [2024-07-15 13:51:09.860960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.113 [2024-07-15 13:51:09.952183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.681 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2833804 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2833804 /var/tmp/spdk2.sock 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2833804 ']' 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.682 13:51:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.682 [2024-07-15 13:51:10.612313] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:32.682 [2024-07-15 13:51:10.612375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833804 ] 00:05:32.682 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.682 [2024-07-15 13:51:10.708621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.940 [2024-07-15 13:51:10.885244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.508 13:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.508 13:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:33.508 13:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2833804 00:05:33.508 13:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2833804 00:05:33.508 13:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.445 lslocks: write error 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2833787 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2833787 ']' 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2833787 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833787 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833787' 00:05:34.445 killing process with pid 2833787 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2833787 00:05:34.445 13:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2833787 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2833804 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2833804 ']' 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2833804 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833804 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.381 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833804' 00:05:35.381 killing process with pid 2833804 00:05:35.382 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2833804 00:05:35.382 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2833804 00:05:35.640 00:05:35.640 real 0m3.753s 00:05:35.640 user 0m3.885s 00:05:35.640 sys 0m1.290s 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.640 ************************************ 00:05:35.640 END TEST locking_app_on_unlocked_coremask 00:05:35.640 ************************************ 00:05:35.640 13:51:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:35.640 13:51:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.640 13:51:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.640 13:51:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.640 13:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.640 ************************************ 00:05:35.640 START TEST locking_app_on_locked_coremask 00:05:35.640 ************************************ 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2834291 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2834291 /var/tmp/spdk.sock 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2834291 ']' 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.640 13:51:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.640 [2024-07-15 13:51:13.622572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:35.640 [2024-07-15 13:51:13.622633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834291 ] 00:05:35.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.640 [2024-07-15 13:51:13.705733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.899 [2024-07-15 13:51:13.795919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2834384 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2834384 /var/tmp/spdk2.sock 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2834384 /var/tmp/spdk2.sock 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2834384 /var/tmp/spdk2.sock 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2834384 ']' 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.466 13:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.466 [2024-07-15 13:51:14.474000] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
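[editor's note] The waitforlisten calls traced here (local rpc_addr, max_retries=100, the "Waiting for process..." echo) poll until the target's RPC socket answers or the process dies. A rough sketch under those assumptions — the probe inside the retry loop is not shown in the trace; rpc_get_methods is one plausible choice:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died -> fail fast
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                     # retries exhausted, never came up
    }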
00:05:36.466 [2024-07-15 13:51:14.474074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834384 ] 00:05:36.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.725 [2024-07-15 13:51:14.571479] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2834291 has claimed it. 00:05:36.725 [2024-07-15 13:51:14.571521] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.294 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2834384) - No such process 00:05:37.294 ERROR: process (pid: 2834384) is no longer running 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2834291 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2834291 00:05:37.294 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.552 lslocks: write error 00:05:37.552 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2834291 00:05:37.552 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2834291 ']' 00:05:37.552 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2834291 00:05:37.552 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834291 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834291' 00:05:37.811 killing process with pid 2834291 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2834291 00:05:37.811 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2834291 00:05:38.069 00:05:38.069 real 0m2.399s 00:05:38.069 user 0m2.557s 00:05:38.069 sys 0m0.755s 00:05:38.069 13:51:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.069 13:51:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.069 ************************************ 00:05:38.069 END TEST locking_app_on_locked_coremask 00:05:38.069 ************************************ 00:05:38.069 13:51:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.069 13:51:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:38.069 13:51:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.069 13:51:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.069 13:51:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.069 ************************************ 00:05:38.069 START TEST locking_overlapped_coremask 00:05:38.069 ************************************ 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2834599 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2834599 /var/tmp/spdk.sock 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2834599 ']' 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.069 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.069 [2024-07-15 13:51:16.102091] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
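[editor's note] The locked-coremask test that just ended passed precisely because the second spdk_tgt exited ("Unable to acquire lock on assigned core mask - exiting.") before listening, so waitforlisten's kill reported "No such process" and failed — and the NOT wrapper inverted that failure into a pass. A simplified sketch of NOT; the es handling traced later in the accel tests (es=234 -> 106 -> 1) shows the real helper also folds signal-style codes and maps known values through a case statement:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset, as traced
        (( !es == 0 ))                         # true exactly when the command failed
    }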
00:05:38.069 [2024-07-15 13:51:16.102152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834599 ] 00:05:38.069 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.329 [2024-07-15 13:51:16.171184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.329 [2024-07-15 13:51:16.264894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.329 [2024-07-15 13:51:16.264994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.329 [2024-07-15 13:51:16.264995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2834783 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2834783 /var/tmp/spdk2.sock 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2834783 /var/tmp/spdk2.sock 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2834783 /var/tmp/spdk2.sock 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2834783 ']' 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.895 13:51:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.153 [2024-07-15 13:51:16.975328] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
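[editor's note] The overlapped-coremask test pins the first instance with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4); the failure that follows is forced by the single shared core:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: bit 2 set, i.e. core 2 is contested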
00:05:39.153 [2024-07-15 13:51:16.975396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834783 ] 00:05:39.153 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.153 [2024-07-15 13:51:17.072091] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2834599 has claimed it. 00:05:39.153 [2024-07-15 13:51:17.072126] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.721 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2834783) - No such process 00:05:39.721 ERROR: process (pid: 2834783) is no longer running 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2834599 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2834599 ']' 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2834599 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834599 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834599' 00:05:39.721 killing process with pid 2834599 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2834599 00:05:39.721 13:51:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2834599 00:05:39.980 00:05:39.980 real 0m1.936s 00:05:39.980 user 0m5.405s 00:05:39.980 sys 0m0.482s 00:05:39.980 13:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.980 13:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.980 ************************************ 00:05:39.980 END TEST locking_overlapped_coremask 00:05:39.980 ************************************ 00:05:40.239 13:51:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.239 13:51:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:40.240 13:51:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.240 13:51:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.240 13:51:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.240 ************************************ 00:05:40.240 START TEST locking_overlapped_coremask_via_rpc 00:05:40.240 ************************************ 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2834993 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2834993 /var/tmp/spdk.sock 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2834993 ']' 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.240 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.240 [2024-07-15 13:51:18.125611] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:40.240 [2024-07-15 13:51:18.125679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834993 ] 00:05:40.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.240 [2024-07-15 13:51:18.211893] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
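[editor's note] Before the overlapped test's teardown above, check_remaining_locks verified that exactly the first instance's lock files survived (cores 0-2 from -m 0x7). The trace shows it literally as:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]   # exactly _000 _001 _002 remain
    }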
00:05:40.240 [2024-07-15 13:51:18.211926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.240 [2024-07-15 13:51:18.299956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.240 [2024-07-15 13:51:18.300056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.240 [2024-07-15 13:51:18.300057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2835022 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2835022 /var/tmp/spdk2.sock 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2835022 ']' 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.176 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.176 [2024-07-15 13:51:18.982954] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:41.176 [2024-07-15 13:51:18.983022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835022 ] 00:05:41.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.176 [2024-07-15 13:51:19.078701] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.176 [2024-07-15 13:51:19.078733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.176 [2024-07-15 13:51:19.240869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.176 [2024-07-15 13:51:19.244266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.176 [2024-07-15 13:51:19.244268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 [2024-07-15 13:51:19.851281] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2834993 has claimed it. 
00:05:42.110 request: 00:05:42.110 { 00:05:42.110 "method": "framework_enable_cpumask_locks", 00:05:42.110 "req_id": 1 00:05:42.110 } 00:05:42.110 Got JSON-RPC error response 00:05:42.110 response: 00:05:42.110 { 00:05:42.110 "code": -32603, 00:05:42.110 "message": "Failed to claim CPU core: 2" 00:05:42.110 } 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2834993 /var/tmp/spdk.sock 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2834993 ']' 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.110 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2835022 /var/tmp/spdk2.sock 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2835022 ']' 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
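[editor's note] The via_rpc scenario just traced, in outline — both instances start with --disable-cpumask-locks, then claim locks at runtime over JSON-RPC. The rpc.py invocation shown is an assumption (the trace drives it through the rpc_cmd wrapper), but the flags, sockets, and method name are taken from the log:

    spdk_tgt -m 0x7  --disable-cpumask-locks &                     # no core locks at startup
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks                  # first claim: cores 0-2 locked
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error -32603 "Failed to claim CPU core: 2": core 2 is already held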
00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.110 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.368 00:05:42.368 real 0m2.146s 00:05:42.368 user 0m0.857s 00:05:42.368 sys 0m0.220s 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.368 13:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.368 ************************************ 00:05:42.368 END TEST locking_overlapped_coremask_via_rpc 00:05:42.368 ************************************ 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:42.368 13:51:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:42.368 13:51:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2834993 ]] 00:05:42.368 13:51:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2834993 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2834993 ']' 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2834993 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834993 00:05:42.368 13:51:20 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.369 13:51:20 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.369 13:51:20 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834993' 00:05:42.369 killing process with pid 2834993 00:05:42.369 13:51:20 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2834993 00:05:42.369 13:51:20 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2834993 00:05:42.627 13:51:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2835022 ]] 00:05:42.627 13:51:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2835022 00:05:42.627 13:51:20 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2835022 ']' 00:05:42.627 13:51:20 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2835022 00:05:42.627 13:51:20 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2835022 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2835022' 00:05:42.884 killing process with pid 2835022 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2835022 00:05:42.884 13:51:20 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2835022 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2834993 ]] 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2834993 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2834993 ']' 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2834993 00:05:43.142 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2834993) - No such process 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2834993 is not found' 00:05:43.142 Process with pid 2834993 is not found 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2835022 ]] 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2835022 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2835022 ']' 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2835022 00:05:43.142 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2835022) - No such process 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2835022 is not found' 00:05:43.142 Process with pid 2835022 is not found 00:05:43.142 13:51:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.142 00:05:43.142 real 0m18.639s 00:05:43.142 user 0m30.875s 00:05:43.142 sys 0m6.202s 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.142 13:51:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.142 ************************************ 00:05:43.142 END TEST cpu_locks 00:05:43.142 ************************************ 00:05:43.142 13:51:21 event -- common/autotest_common.sh@1142 -- # return 0 00:05:43.142 00:05:43.142 real 0m44.564s 00:05:43.142 user 1m22.319s 00:05:43.142 sys 0m10.798s 00:05:43.142 13:51:21 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.142 13:51:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.142 ************************************ 00:05:43.142 END TEST event 00:05:43.142 ************************************ 00:05:43.142 13:51:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.142 13:51:21 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:43.142 13:51:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.142 13:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.142 
13:51:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.401 ************************************ 00:05:43.401 START TEST thread 00:05:43.401 ************************************ 00:05:43.401 13:51:21 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:43.401 * Looking for test storage... 00:05:43.401 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:43.401 13:51:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.401 13:51:21 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:43.401 13:51:21 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.401 13:51:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.401 ************************************ 00:05:43.401 START TEST thread_poller_perf 00:05:43.401 ************************************ 00:05:43.401 13:51:21 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.401 [2024-07-15 13:51:21.402373] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:43.401 [2024-07-15 13:51:21.402477] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835469 ] 00:05:43.401 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.659 [2024-07-15 13:51:21.489486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.659 [2024-07-15 13:51:21.571003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.659 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:44.595 ====================================== 00:05:44.595 busy:2305088270 (cyc) 00:05:44.595 total_run_count: 845000 00:05:44.595 tsc_hz: 2300000000 (cyc) 00:05:44.595 ====================================== 00:05:44.595 poller_cost: 2727 (cyc), 1185 (nsec) 00:05:44.595 00:05:44.595 real 0m1.262s 00:05:44.595 user 0m1.146s 00:05:44.595 sys 0m0.112s 00:05:44.595 13:51:22 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.595 13:51:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.595 ************************************ 00:05:44.595 END TEST thread_poller_perf 00:05:44.595 ************************************ 00:05:44.854 13:51:22 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:44.854 13:51:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.854 13:51:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:44.854 13:51:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.854 13:51:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.854 ************************************ 00:05:44.854 START TEST thread_poller_perf 00:05:44.854 ************************************ 00:05:44.854 13:51:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.854 [2024-07-15 13:51:22.751703] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:44.854 [2024-07-15 13:51:22.751792] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835667 ] 00:05:44.854 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.854 [2024-07-15 13:51:22.839858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.113 [2024-07-15 13:51:22.929849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.113 Running 1000 pollers for 1 seconds with 0 microseconds period. 
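[editor's note] In the summary above, poller_cost is just the busy cycle count divided by the run count, converted to nanoseconds at the reported TSC rate; the first run's figures check out, and the zero-period run that follows is far cheaper per call because busy pollers skip the timer bookkeeping:

    echo $(( 2305088270 / 845000 ))              # -> 2727 (cyc), as reported
    echo $(( 2727 * 1000000000 / 2300000000 ))   # -> 1185 (nsec) at 2.3 GHz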
00:05:46.050 ====================================== 00:05:46.050 busy:2301413398 (cyc) 00:05:46.050 total_run_count: 14030000 00:05:46.050 tsc_hz: 2300000000 (cyc) 00:05:46.050 ====================================== 00:05:46.050 poller_cost: 164 (cyc), 71 (nsec) 00:05:46.050 00:05:46.050 real 0m1.268s 00:05:46.050 user 0m1.153s 00:05:46.050 sys 0m0.110s 00:05:46.050 13:51:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.050 13:51:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.050 ************************************ 00:05:46.050 END TEST thread_poller_perf 00:05:46.050 ************************************ 00:05:46.050 13:51:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:46.050 13:51:24 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:46.050 13:51:24 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:46.050 13:51:24 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.050 13:51:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.050 13:51:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.050 ************************************ 00:05:46.050 START TEST thread_spdk_lock 00:05:46.050 ************************************ 00:05:46.050 13:51:24 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:46.050 [2024-07-15 13:51:24.109422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:46.050 [2024-07-15 13:51:24.109509] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835873 ] 00:05:46.309 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.309 [2024-07-15 13:51:24.200495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.309 [2024-07-15 13:51:24.290763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.309 [2024-07-15 13:51:24.290763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.877 [2024-07-15 13:51:24.782566] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:46.877 [2024-07-15 13:51:24.782602] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:46.877 [2024-07-15 13:51:24.782628] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14ce200 00:05:46.877 [2024-07-15 13:51:24.783522] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:46.877 [2024-07-15 13:51:24.783626] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:46.877 [2024-07-15 13:51:24.783645] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:46.877 Starting test contend 00:05:46.877 Worker Delay Wait us Hold us Total us 00:05:46.877 0 3 173959 185568 359528 00:05:46.877 1 5 88510 286634 375145 00:05:46.877 PASS test contend 00:05:46.877 Starting test hold_by_poller 00:05:46.877 PASS test hold_by_poller 00:05:46.877 Starting test hold_by_message 00:05:46.877 PASS test hold_by_message 00:05:46.877 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:46.877 100014 assertions passed 00:05:46.877 0 assertions failed 00:05:46.877 00:05:46.877 real 0m0.761s 00:05:46.877 user 0m1.136s 00:05:46.877 sys 0m0.113s 00:05:46.877 13:51:24 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.877 13:51:24 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:46.877 ************************************ 00:05:46.877 END TEST thread_spdk_lock 00:05:46.877 ************************************ 00:05:46.877 13:51:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:46.877 00:05:46.877 real 0m3.663s 00:05:46.877 user 0m3.573s 00:05:46.877 sys 0m0.600s 00:05:46.877 13:51:24 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.877 13:51:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.877 ************************************ 00:05:46.877 END TEST thread 00:05:46.877 ************************************ 00:05:46.877 13:51:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.877 13:51:24 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:46.877 13:51:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.877 13:51:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.877 13:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:47.160 ************************************ 00:05:47.160 START TEST accel 00:05:47.160 ************************************ 00:05:47.160 13:51:24 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:47.160 * Looking for test storage... 00:05:47.160 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:47.160 13:51:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:47.160 13:51:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:47.160 13:51:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.160 13:51:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2836111 00:05:47.160 13:51:25 accel -- accel/accel.sh@63 -- # waitforlisten 2836111 00:05:47.160 13:51:25 accel -- common/autotest_common.sh@829 -- # '[' -z 2836111 ']' 00:05:47.161 13:51:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.161 13:51:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.161 13:51:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:47.161 13:51:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:47.161 13:51:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
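[editor's note] Reading the contend summary further up by its column headers, each worker row presumably lists the injected delay and the microseconds spent waiting for versus holding the spinlock, so Wait us + Hold us should match Total us to within rounding:

    echo $(( 173959 + 185568 ))   # 359527, vs reported Total us 359528
    echo $(( 88510 + 286634 ))    # 375144, vs reported Total us 375145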
00:05:47.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.161 13:51:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.161 13:51:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.161 13:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.161 13:51:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.161 13:51:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.161 13:51:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.161 13:51:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.161 13:51:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:47.161 13:51:25 accel -- accel/accel.sh@41 -- # jq -r . 00:05:47.161 [2024-07-15 13:51:25.121448] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:47.161 [2024-07-15 13:51:25.121520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836111 ] 00:05:47.161 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.161 [2024-07-15 13:51:25.206185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.481 [2024-07-15 13:51:25.295738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.049 13:51:25 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.049 13:51:25 accel -- common/autotest_common.sh@862 -- # return 0 00:05:48.049 13:51:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:48.049 13:51:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:48.049 13:51:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:48.049 13:51:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:48.049 13:51:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:48.049 13:51:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:48.049 13:51:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:48.049 13:51:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.049 13:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.049 13:51:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.049 13:51:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 
13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:48.049 13:51:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:48.049 13:51:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:48.049 13:51:26 accel -- accel/accel.sh@75 -- # killprocess 2836111 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@948 -- # '[' -z 2836111 ']' 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@952 -- # kill -0 2836111 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@953 -- # uname 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2836111 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2836111' 00:05:48.049 killing process with pid 2836111 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@967 -- # kill 2836111 00:05:48.049 13:51:26 accel -- common/autotest_common.sh@972 -- # wait 2836111 00:05:48.618 13:51:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:48.618 13:51:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.618 13:51:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:48.618 13:51:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
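[editor's note] The opcode-to-module table parsed above relies on the jq filter shown in the trace, which flattens a JSON object into key=value lines for the read loop. A standalone check of that filter (the sample input here is invented):

    echo '{"copy":"software","fill":"software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software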
00:05:48.618 13:51:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.618 13:51:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.618 13:51:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.618 13:51:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.618 ************************************ 00:05:48.618 START TEST accel_missing_filename 00:05:48.618 ************************************ 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.618 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:48.618 13:51:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:48.618 [2024-07-15 13:51:26.562459] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:48.618 [2024-07-15 13:51:26.562548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836336 ] 00:05:48.618 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.618 [2024-07-15 13:51:26.649475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.877 [2024-07-15 13:51:26.740083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.877 [2024-07-15 13:51:26.786916] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.877 [2024-07-15 13:51:26.856437] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:48.877 A filename is required. 
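[editor's note] The "A filename is required." failure above is the point of the missing-filename test: accel_perf refuses -w compress without an input file, and NOT turns the refusal into a pass. The compress_verify test that follows supplies one (flags as used in the trace; binary path abbreviated):

    accel_perf -t 1 -w compress                        # -> "A filename is required."
    accel_perf -t 1 -w compress -l test/accel/bib -y   # -l gives the input; -y requests verify,
                                                       # which compress itself then rejects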
00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.877 00:05:48.877 real 0m0.393s 00:05:48.877 user 0m0.262s 00:05:48.877 sys 0m0.170s 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.877 13:51:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:48.877 ************************************ 00:05:48.877 END TEST accel_missing_filename 00:05:48.877 ************************************ 00:05:49.137 13:51:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.137 13:51:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:49.137 13:51:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:49.137 13:51:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.137 13:51:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.137 ************************************ 00:05:49.137 START TEST accel_compress_verify 00:05:49.137 ************************************ 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.137 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.137 
13:51:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:49.137 13:51:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:49.137 [2024-07-15 13:51:27.032946] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:49.137 [2024-07-15 13:51:27.033029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836361 ] 00:05:49.138 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.138 [2024-07-15 13:51:27.119950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.138 [2024-07-15 13:51:27.199614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.397 [2024-07-15 13:51:27.242736] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.397 [2024-07-15 13:51:27.312159] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:49.397 00:05:49.397 Compression does not support the verify option, aborting. 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.397 00:05:49.397 real 0m0.377s 00:05:49.397 user 0m0.261s 00:05:49.397 sys 0m0.153s 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.397 13:51:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:49.397 ************************************ 00:05:49.397 END TEST accel_compress_verify 00:05:49.397 ************************************ 00:05:49.397 13:51:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.397 13:51:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:49.397 13:51:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:49.397 13:51:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.397 13:51:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.397 ************************************ 00:05:49.397 START TEST accel_wrong_workload 00:05:49.397 ************************************ 00:05:49.397 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:49.397 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:49.397 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:49.397 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:49.397 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.397 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:49.656 13:51:27 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:49.656 13:51:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:49.656 Unsupported workload type: foobar 00:05:49.656 [2024-07-15 13:51:27.487111] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:49.656 accel_perf options: 00:05:49.656 [-h help message] 00:05:49.656 [-q queue depth per core] 00:05:49.656 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:49.656 [-T number of threads per core 00:05:49.656 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:49.656 [-t time in seconds] 00:05:49.656 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:49.656 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:49.656 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:49.656 [-l for compress/decompress workloads, name of uncompressed input file 00:05:49.656 [-S for crc32c workload, use this seed value (default 0) 00:05:49.656 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:49.656 [-f for fill workload, use this BYTE value (default 255) 00:05:49.656 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:49.656 [-y verify result if this switch is on] 00:05:49.656 [-a tasks to allocate per core (default: same value as -q)] 00:05:49.656 Can be used to spread operations across a wider range of memory. 
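That usage dump is accel_perf's own option list, so the workloads exercised in this run can be reproduced directly from it. The -c /dev/fd/62 seen in every invocation is consistent with bash process substitution feeding in the JSON that build_accel_config assembles (a reading of the trace, not a quoted script line). A few invocations built only from the documented flags, with paths relative to the spdk tree used throughout this run:

    # software crc32c for 1 second with seed 32, verifying results (-y), as in the tests below:
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # compress with a named input file (-l), the flag accel_missing_filename deliberately omits:
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y
    # feed a JSON config on a substituted fd, matching the -c /dev/fd/62 in the trace:
    ./build/examples/accel_perf -c <(build_accel_config) -t 1 -w copy -y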
00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.656 00:05:49.656 real 0m0.029s 00:05:49.656 user 0m0.014s 00:05:49.656 sys 0m0.015s 00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.656 13:51:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:49.656 ************************************ 00:05:49.656 END TEST accel_wrong_workload 00:05:49.656 ************************************ 00:05:49.656 Error: writing output failed: Broken pipe 00:05:49.656 13:51:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.656 13:51:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.657 ************************************ 00:05:49.657 START TEST accel_negative_buffers 00:05:49.657 ************************************ 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:49.657 13:51:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:49.657 -x option must be non-negative. 
00:05:49.657 [2024-07-15 13:51:27.587356] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:49.657 accel_perf options: 00:05:49.657 [-h help message] 00:05:49.657 [-q queue depth per core] 00:05:49.657 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:49.657 [-T number of threads per core 00:05:49.657 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:49.657 [-t time in seconds] 00:05:49.657 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:49.657 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:49.657 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:49.657 [-l for compress/decompress workloads, name of uncompressed input file 00:05:49.657 [-S for crc32c workload, use this seed value (default 0) 00:05:49.657 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:49.657 [-f for fill workload, use this BYTE value (default 255) 00:05:49.657 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:49.657 [-y verify result if this switch is on] 00:05:49.657 [-a tasks to allocate per core (default: same value as -q)] 00:05:49.657 Can be used to spread operations across a wider range of memory. 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.657 00:05:49.657 real 0m0.029s 00:05:49.657 user 0m0.013s 00:05:49.657 sys 0m0.015s 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.657 13:51:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:49.657 ************************************ 00:05:49.657 END TEST accel_negative_buffers 00:05:49.657 ************************************ 00:05:49.657 Error: writing output failed: Broken pipe 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.657 13:51:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.657 13:51:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.657 ************************************ 00:05:49.657 START TEST accel_crc32c 00:05:49.657 ************************************ 00:05:49.657 13:51:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:49.657 13:51:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:49.657 [2024-07-15 13:51:27.694005] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:49.657 [2024-07-15 13:51:27.694077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836593 ] 00:05:49.916 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.916 [2024-07-15 13:51:27.783094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.916 [2024-07-15 13:51:27.865299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.916 13:51:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:51.294 13:51:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.294 00:05:51.294 real 0m1.392s 00:05:51.294 user 0m1.247s 00:05:51.294 sys 0m0.159s 00:05:51.294 13:51:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.294 13:51:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:51.294 ************************************ 00:05:51.294 END TEST accel_crc32c 00:05:51.294 ************************************ 00:05:51.294 13:51:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.294 13:51:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:51.294 13:51:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.294 13:51:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.294 13:51:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.294 ************************************ 00:05:51.294 START TEST accel_crc32c_C2 00:05:51.294 ************************************ 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:51.294 13:51:29 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:51.294 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:51.294 [2024-07-15 13:51:29.166056] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:51.294 [2024-07-15 13:51:29.166144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836791 ] 00:05:51.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.294 [2024-07-15 13:51:29.250168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.294 [2024-07-15 13:51:29.333193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:51.553 13:51:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.490 00:05:52.490 real 0m1.387s 00:05:52.490 user 0m1.245s 00:05:52.490 sys 0m0.156s 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.490 13:51:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:52.490 ************************************ 00:05:52.490 END TEST accel_crc32c_C2 00:05:52.490 ************************************ 00:05:52.750 13:51:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.750 13:51:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:52.750 13:51:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:52.750 13:51:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.750 13:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.750 ************************************ 00:05:52.750 START TEST accel_copy 00:05:52.750 ************************************ 00:05:52.750 13:51:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:52.750 13:51:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:52.750 [2024-07-15 13:51:30.634621] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:52.750 [2024-07-15 13:51:30.634704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837002 ] 00:05:52.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.750 [2024-07-15 13:51:30.721912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.750 [2024-07-15 13:51:30.800495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.009 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.010 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.010 13:51:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.010 13:51:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.010 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.010 13:51:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 
13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:53.946 13:51:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.946 00:05:53.946 real 0m1.369s 00:05:53.946 user 0m1.228s 00:05:53.946 sys 0m0.155s 00:05:53.946 13:51:31 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.946 13:51:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:53.946 ************************************ 00:05:53.946 END TEST accel_copy 00:05:53.946 ************************************ 00:05:54.205 13:51:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.205 13:51:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.205 13:51:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:54.205 13:51:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.205 13:51:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.205 ************************************ 00:05:54.205 START TEST accel_fill 00:05:54.205 ************************************ 00:05:54.205 13:51:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:54.205 13:51:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:54.205 [2024-07-15 13:51:32.086245] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:54.205 [2024-07-15 13:51:32.086330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837200 ] 00:05:54.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.205 [2024-07-15 13:51:32.154903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.205 [2024-07-15 13:51:32.237905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.464 13:51:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.400 13:51:33 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:55.400 13:51:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.400 00:05:55.400 real 0m1.375s 00:05:55.400 user 0m1.239s 00:05:55.400 sys 0m0.148s 00:05:55.400 13:51:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.400 13:51:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:55.400 ************************************ 00:05:55.400 END TEST accel_fill 00:05:55.400 ************************************ 00:05:55.659 13:51:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.659 13:51:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:55.659 13:51:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:55.659 13:51:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.659 13:51:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.659 ************************************ 00:05:55.659 START TEST accel_copy_crc32c 00:05:55.659 ************************************ 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:55.659 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:55.659 [2024-07-15 13:51:33.540230] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:55.659 [2024-07-15 13:51:33.540314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837405 ] 00:05:55.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.659 [2024-07-15 13:51:33.627569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.659 [2024-07-15 13:51:33.710496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.977 
13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.977 13:51:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.911 00:05:56.911 real 0m1.390s 00:05:56.911 user 0m1.250s 00:05:56.911 sys 0m0.153s 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.911 13:51:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 ************************************ 00:05:56.911 END TEST accel_copy_crc32c 00:05:56.911 ************************************ 00:05:56.911 13:51:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.911 13:51:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.911 13:51:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.911 13:51:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.911 13:51:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.169 ************************************ 00:05:57.169 START TEST accel_copy_crc32c_C2 00:05:57.169 ************************************ 00:05:57.169 13:51:34 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:57.169 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.169 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:57.169 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.169 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.169 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:57.169 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:57.170 13:51:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:57.170 [2024-07-15 13:51:35.014061] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:57.170 [2024-07-15 13:51:35.014136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837605 ] 00:05:57.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.170 [2024-07-15 13:51:35.100976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.170 [2024-07-15 13:51:35.190033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.170 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.428 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.429 13:51:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.365 00:05:58.365 real 0m1.397s 00:05:58.365 user 0m1.248s 00:05:58.365 sys 0m0.163s 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.365 13:51:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:58.365 ************************************ 00:05:58.365 END TEST accel_copy_crc32c_C2 00:05:58.365 ************************************ 00:05:58.365 13:51:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.365 13:51:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:58.365 13:51:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:58.365 13:51:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.365 13:51:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.625 ************************************ 00:05:58.625 START TEST accel_dualcast 00:05:58.625 ************************************ 00:05:58.625 13:51:36 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:58.625 [2024-07-15 13:51:36.490655] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:58.625 [2024-07-15 13:51:36.490742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837810 ] 00:05:58.625 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.625 [2024-07-15 13:51:36.563049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.625 [2024-07-15 13:51:36.642597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.625 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.884 13:51:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.819 13:51:37 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:59.819 13:51:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.819 00:05:59.819 real 0m1.360s 00:05:59.819 user 0m1.228s 00:05:59.819 sys 0m0.144s 00:05:59.819 13:51:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.819 13:51:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:59.819 ************************************ 00:05:59.819 END TEST accel_dualcast 00:05:59.819 ************************************ 00:05:59.819 13:51:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.819 13:51:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:59.819 13:51:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.819 13:51:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.819 13:51:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.078 ************************************ 00:06:00.078 START TEST accel_compare 00:06:00.078 ************************************ 00:06:00.078 13:51:37 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:00.078 13:51:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:00.078 [2024-07-15 13:51:37.935205] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:00.078 [2024-07-15 13:51:37.935299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838012 ] 00:06:00.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.078 [2024-07-15 13:51:38.018212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.078 [2024-07-15 13:51:38.106431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:00.337 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.338 13:51:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 
13:51:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:01.274 13:51:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.274 00:06:01.274 real 0m1.384s 00:06:01.274 user 0m1.245s 00:06:01.274 sys 0m0.153s 00:06:01.274 13:51:39 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.274 13:51:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:01.274 ************************************ 00:06:01.274 END TEST accel_compare 00:06:01.274 ************************************ 00:06:01.274 13:51:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.274 13:51:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:01.274 13:51:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.274 13:51:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.274 13:51:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.533 ************************************ 00:06:01.533 START TEST accel_xor 00:06:01.533 ************************************ 00:06:01.533 13:51:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:01.533 13:51:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:01.533 [2024-07-15 13:51:39.402617] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:01.533 [2024-07-15 13:51:39.402702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838220 ] 00:06:01.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.533 [2024-07-15 13:51:39.479243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.533 [2024-07-15 13:51:39.560772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.792 13:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.728 13:51:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:02.729 13:51:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.729 00:06:02.729 real 0m1.379s 00:06:02.729 user 0m1.243s 00:06:02.729 sys 0m0.149s 00:06:02.729 13:51:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.729 13:51:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:02.729 ************************************ 00:06:02.729 END TEST accel_xor 00:06:02.729 ************************************ 00:06:02.729 13:51:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.729 13:51:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:02.988 13:51:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.988 13:51:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.988 13:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.988 ************************************ 00:06:02.988 START TEST accel_xor 00:06:02.988 ************************************ 00:06:02.988 13:51:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:02.988 13:51:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:02.989 13:51:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:02.989 [2024-07-15 13:51:40.864768] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:02.989 [2024-07-15 13:51:40.864855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838418 ] 00:06:02.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.989 [2024-07-15 13:51:40.952522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.989 [2024-07-15 13:51:41.034438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.247 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.247 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 13:51:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.184 13:51:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.184 13:51:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.184 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.184 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.184 13:51:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.184 13:51:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:04.185 13:51:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.185 00:06:04.185 real 0m1.390s 00:06:04.185 user 0m1.243s 00:06:04.185 sys 0m0.160s 00:06:04.185 13:51:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.185 13:51:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:04.185 ************************************ 00:06:04.185 END TEST accel_xor 00:06:04.185 ************************************ 00:06:04.443 13:51:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.443 13:51:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:04.443 13:51:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:04.443 13:51:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.443 13:51:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.443 ************************************ 00:06:04.443 START TEST accel_dif_verify 00:06:04.443 ************************************ 00:06:04.443 13:51:42 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:04.443 13:51:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:04.443 [2024-07-15 13:51:42.337711] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
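
The -c /dev/fd/62 argument in the accel_perf command line just above is a process-substitution descriptor: build_accel_config assembles the accel JSON config in memory and bash hands accel_perf a /dev/fd path instead of a file on disk. Roughly, under that assumption (the config body below is a placeholder, not what this run generated):

    # Sketch: pass an in-memory JSON config to accel_perf without a temp file.
    cfg='{"subsystems": []}'   # placeholder; the real text comes from build_accel_config
    ./build/examples/accel_perf -c <(printf '%s\n' "$cfg") -t 1 -w dif_verify

bash expands <(...) to a /dev/fd/NN path, which is why the fd number shows up verbatim in the traced command.
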
00:06:04.443 [2024-07-15 13:51:42.337811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838617 ] 00:06:04.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.443 [2024-07-15 13:51:42.426575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.701 [2024-07-15 13:51:42.516854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.701 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.702 13:51:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:06.077 13:51:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.077 00:06:06.077 real 0m1.398s 00:06:06.077 user 0m1.251s 00:06:06.077 sys 0m0.162s 00:06:06.077 13:51:43 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.077 13:51:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:06.077 ************************************ 00:06:06.077 END TEST accel_dif_verify 00:06:06.077 ************************************ 00:06:06.077 13:51:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.077 13:51:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:06.077 13:51:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:06.077 13:51:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.077 13:51:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.077 ************************************ 00:06:06.077 START TEST accel_dif_generate 00:06:06.077 ************************************ 00:06:06.077 13:51:43 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 
13:51:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:06.077 13:51:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:06.077 [2024-07-15 13:51:43.818575] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:06.077 [2024-07-15 13:51:43.818650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838827 ] 00:06:06.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.077 [2024-07-15 13:51:43.905387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.077 [2024-07-15 13:51:43.985591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:06.077 13:51:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.077 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.078 13:51:44 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.078 13:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.456 13:51:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:07.456 13:51:45 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.456 00:06:07.456 real 0m1.370s 00:06:07.456 user 0m1.235s 00:06:07.456 sys 0m0.152s 00:06:07.456 13:51:45 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.457 13:51:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:07.457 ************************************ 00:06:07.457 END TEST accel_dif_generate 00:06:07.457 ************************************ 00:06:07.457 13:51:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.457 13:51:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:07.457 13:51:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:07.457 13:51:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.457 13:51:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.457 ************************************ 00:06:07.457 START TEST accel_dif_generate_copy 00:06:07.457 ************************************ 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:07.457 [2024-07-15 13:51:45.270554] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
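
Each real/user/sys triple in this log is bash's time builtin wrapping a test body, and the starred banners around it come from the run_test helper in test/common/autotest_common.sh. A stripped-down look-alike of that wrapper (the body here is assumed; the real helper also manages xtrace state and error propagation):

    # Minimal run_test look-alike: banner, timed body, banner.
    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' \
            "START TEST $name" '************************************'
        time "$@"
        local rc=$?
        printf '%s\n' '************************************' \
            "END TEST $name" '************************************'
        return $rc
    }

So run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy, as traced above, is just this wrapper around the accel_test function.
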
00:06:07.457 [2024-07-15 13:51:45.270630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839027 ] 00:06:07.457 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.457 [2024-07-15 13:51:45.360320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.457 [2024-07-15 13:51:45.444620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.457 13:51:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.834 00:06:08.834 real 0m1.393s 00:06:08.834 user 0m1.249s 00:06:08.834 sys 0m0.158s 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.834 13:51:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:08.834 ************************************ 00:06:08.834 END TEST accel_dif_generate_copy 00:06:08.834 ************************************ 00:06:08.834 13:51:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.834 13:51:46 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:08.834 13:51:46 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:08.834 13:51:46 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:08.834 13:51:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.834 13:51:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.834 ************************************ 00:06:08.834 START TEST accel_comp 00:06:08.834 ************************************ 00:06:08.834 13:51:46 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:08.834 13:51:46 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:08.834 13:51:46 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:08.834 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.834 13:51:46 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:08.834 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:08.835 13:51:46 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:08.835 [2024-07-15 13:51:46.741255] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:08.835 [2024-07-15 13:51:46.741351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839236 ] 00:06:08.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.835 [2024-07-15 13:51:46.829974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.093 [2024-07-15 13:51:46.914593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.093 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.094 13:51:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:10.470 13:51:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.470 00:06:10.470 real 0m1.392s 00:06:10.470 user 0m1.243s 00:06:10.470 sys 0m0.164s 00:06:10.470 13:51:48 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.470 13:51:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:10.470 ************************************ 00:06:10.470 END TEST accel_comp 00:06:10.470 ************************************ 00:06:10.470 13:51:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.470 13:51:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:10.470 13:51:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.470 13:51:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:10.470 13:51:48 accel -- common/autotest_common.sh@10 -- # set +x
00:06:10.470 ************************************
00:06:10.470 START TEST accel_decomp
00:06:10.470 ************************************
00:06:10.470 13:51:48 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 13:51:48.220427] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
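
This accel_decomp run and the accel_decomp_full run that follows it reuse the bib payload that accel_comp just compressed; the three invocations differ only in their accel_perf flags: -w compress writes test/accel/bib out compressed, -w decompress -y inflates and verifies it, and accel_decomp_full adds -o 0, which appears to lift the 4096-byte default transfer size so the file is handled in one operation (that reading of -o is an assumption; the flags themselves are copied from the trace). As a plain command list, with paths shortened to the spdk checkout:

    perf=./build/examples/accel_perf
    bib=test/accel/bib
    $perf -t 1 -w compress   -l "$bib"           # accel_comp
    $perf -t 1 -w decompress -l "$bib" -y        # accel_decomp
    $perf -t 1 -w decompress -l "$bib" -y -o 0   # accel_decomp_full
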
00:06:10.470 [2024-07-15 13:51:48.220509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839462 ] 00:06:10.470 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.470 [2024-07-15 13:51:48.306556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.470 [2024-07-15 13:51:48.388990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.470 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.471 13:51:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.848 13:51:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.848 00:06:11.848 real 0m1.392s 00:06:11.848 user 0m1.248s 00:06:11.848 sys 0m0.159s 00:06:11.848 13:51:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.848 13:51:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 ************************************ 00:06:11.848 END TEST accel_decomp 00:06:11.848 ************************************ 00:06:11.848 13:51:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.848 13:51:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:11.848 13:51:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:11.848 13:51:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.848 13:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 ************************************ 00:06:11.848 START TEST accel_decomp_full 00:06:11.848 ************************************ 00:06:11.848 13:51:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:11.848 13:51:49 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:11.848 [2024-07-15 13:51:49.693312] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:11.848 [2024-07-15 13:51:49.693395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839703 ] 00:06:11.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.848 [2024-07-15 13:51:49.779054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.848 [2024-07-15 13:51:49.858001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.848 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.849 13:51:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.224 13:51:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.224 00:06:13.224 real 0m1.384s 00:06:13.224 user 0m1.248s 00:06:13.224 sys 0m0.150s 00:06:13.224 13:51:51 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.224 13:51:51 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:13.224 ************************************ 00:06:13.224 END TEST accel_decomp_full 00:06:13.224 ************************************ 00:06:13.224 13:51:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.224 13:51:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.224 13:51:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:06:13.224 13:51:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.224 13:51:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.224 ************************************ 00:06:13.224 START TEST accel_decomp_mcore 00:06:13.224 ************************************ 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:13.224 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:13.224 [2024-07-15 13:51:51.160857] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
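
accel_decomp_mcore re-runs the same workload with -m 0xf, a hexadecimal core mask selecting four cores; the EAL output below confirms four reactors starting on cores 0-3. A minimal sketch of the added flag, other options as in the single-core run above:

  # same decompress workload fanned out over four reactor cores (mask 0xf)
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf

The summary for this test further down (real ~1.4 s, user ~4.6 s) is the expected signature of four cores polling concurrently.
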
00:06:13.224 [2024-07-15 13:51:51.160937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839945 ] 00:06:13.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.224 [2024-07-15 13:51:51.250316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.483 [2024-07-15 13:51:51.336758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.483 [2024-07-15 13:51:51.336862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.483 [2024-07-15 13:51:51.336961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.483 [2024-07-15 13:51:51.336962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.483 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.484 13:51:51 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:06:13.484 13:51:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.860 00:06:14.860 real 0m1.409s 00:06:14.860 user 0m4.627s 00:06:14.860 sys 0m0.171s 00:06:14.860 13:51:52 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.860 13:51:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:14.860 ************************************ 00:06:14.860 END TEST accel_decomp_mcore 00:06:14.860 ************************************ 00:06:14.860 13:51:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.860 13:51:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.860 13:51:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:14.860 13:51:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.860 13:51:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.860 ************************************ 00:06:14.860 START TEST accel_decomp_full_mcore 00:06:14.860 ************************************ 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:14.860 [2024-07-15 13:51:52.652465] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
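
accel_decomp_full_mcore combines the two knobs seen so far: -o 0, as in accel_decomp_full above, plus the -m 0xf core mask. Judging from the logged operation size, which changes from the 4096-byte default to '111250 bytes', -o appears to override the per-operation transfer size, with 0 sizing each operation to the whole test file:

  # full-buffer decompress across four cores: -o 0 appears to select
  # whole-file (111250-byte) operations instead of 4096-byte chunks
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
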
00:06:14.860 [2024-07-15 13:51:52.652542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840195 ] 00:06:14.860 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.860 [2024-07-15 13:51:52.727565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.860 [2024-07-15 13:51:52.812771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.860 [2024-07-15 13:51:52.812873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.860 [2024-07-15 13:51:52.812972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.860 [2024-07-15 13:51:52.812973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:14.860 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.861 13:51:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.244 00:06:16.244 real 0m1.402s 00:06:16.244 user 0m4.658s 00:06:16.244 sys 0m0.159s 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.244 13:51:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:16.244 ************************************ 00:06:16.244 END TEST accel_decomp_full_mcore 00:06:16.244 ************************************ 00:06:16.244 13:51:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.244 13:51:54 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.244 13:51:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:16.244 13:51:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.244 13:51:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.244 ************************************ 00:06:16.244 START TEST accel_decomp_mthread 00:06:16.244 ************************************ 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:16.244 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:16.244 [2024-07-15 13:51:54.136045] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
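
accel_decomp_mthread returns to a single core (-c 0x1 in the EAL parameters below) but adds -T 2; the logged configuration records this knob as 2 where the earlier single-threaded runs recorded 1, suggesting two worker threads share the decompress load:

  # single-core decompress with two worker threads (-T sets the thread count)
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2
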
00:06:16.244 [2024-07-15 13:51:54.136121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840414 ] 00:06:16.244 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.244 [2024-07-15 13:51:54.225686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.244 [2024-07-15 13:51:54.308442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.504 13:51:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.440 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.699 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.699 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.699 13:51:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.699 00:06:17.699 real 0m1.398s 00:06:17.699 user 0m1.247s 00:06:17.699 sys 0m0.167s 00:06:17.699 13:51:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.699 13:51:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:17.699 ************************************ 00:06:17.699 END TEST accel_decomp_mthread 00:06:17.699 ************************************ 00:06:17.699 13:51:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.699 13:51:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.699 13:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:17.699 13:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:17.699 13:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.699 ************************************ 00:06:17.699 START TEST accel_decomp_full_mthread 00:06:17.699 ************************************ 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:17.699 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:17.699 [2024-07-15 13:51:55.609920] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:17.699 [2024-07-15 13:51:55.609991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840614 ] 00:06:17.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.699 [2024-07-15 13:51:55.697208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.959 [2024-07-15 13:51:55.782633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.959 13:51:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.959 13:51:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.338 00:06:19.338 real 0m1.412s 00:06:19.338 user 0m1.266s 00:06:19.338 sys 0m0.160s 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.338 13:51:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:19.338 ************************************ 00:06:19.338 END 
TEST accel_decomp_full_mthread 00:06:19.338 ************************************ 00:06:19.338 13:51:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.338 13:51:57 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:19.338 13:51:57 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:19.338 13:51:57 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:19.338 13:51:57 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:19.338 13:51:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.338 13:51:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.338 13:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.338 13:51:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.338 13:51:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.338 13:51:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.338 13:51:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.338 13:51:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:19.338 13:51:57 accel -- accel/accel.sh@41 -- # jq -r . 00:06:19.338 ************************************ 00:06:19.338 START TEST accel_dif_functional_tests 00:06:19.338 ************************************ 00:06:19.338 13:51:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:19.338 [2024-07-15 13:51:57.116493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:19.338 [2024-07-15 13:51:57.116576] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840824 ] 00:06:19.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.338 [2024-07-15 13:51:57.202523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.338 [2024-07-15 13:51:57.285292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.338 [2024-07-15 13:51:57.285391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.338 [2024-07-15 13:51:57.285391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.338 00:06:19.338 00:06:19.338 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.338 http://cunit.sourceforge.net/ 00:06:19.338 00:06:19.338 00:06:19.338 Suite: accel_dif 00:06:19.338 Test: verify: DIF generated, GUARD check ...passed 00:06:19.338 Test: verify: DIF generated, APPTAG check ...passed 00:06:19.338 Test: verify: DIF generated, REFTAG check ...passed 00:06:19.338 Test: verify: DIF not generated, GUARD check ...[2024-07-15 13:51:57.364991] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:19.338 passed 00:06:19.338 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:51:57.365049] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:19.338 passed 00:06:19.338 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:51:57.365092] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:19.338 passed 00:06:19.338 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:19.338 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-07-15 13:51:57.365142] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:19.338 passed 00:06:19.338 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:19.338 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:19.338 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:19.338 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:51:57.365245] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:19.339 passed 00:06:19.339 Test: verify copy: DIF generated, GUARD check ...passed 00:06:19.339 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:19.339 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:19.339 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:51:57.365356] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:19.339 passed 00:06:19.339 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:51:57.365386] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:19.339 passed 00:06:19.339 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 13:51:57.365413] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:19.339 passed 00:06:19.339 Test: generate copy: DIF generated, GUARD check ...passed 00:06:19.339 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:19.339 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:19.339 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:19.339 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:19.339 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:19.339 Test: generate copy: iovecs-len validate ...[2024-07-15 13:51:57.365594] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:19.339 passed 00:06:19.339 Test: generate copy: buffer alignment validate ...passed 00:06:19.339 00:06:19.339 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.339 suites 1 1 n/a 0 0 00:06:19.339 tests 26 26 26 0 0 00:06:19.339 asserts 115 115 115 0 n/a 00:06:19.339 00:06:19.339 Elapsed time = 0.002 seconds 00:06:19.598 00:06:19.598 real 0m0.455s 00:06:19.598 user 0m0.639s 00:06:19.598 sys 0m0.182s 00:06:19.598 13:51:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.598 13:51:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:19.598 ************************************ 00:06:19.598 END TEST accel_dif_functional_tests 00:06:19.598 ************************************ 00:06:19.598 13:51:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.598 00:06:19.598 real 0m32.609s 00:06:19.598 user 0m35.188s 00:06:19.598 sys 0m5.675s 00:06:19.598 13:51:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.598 13:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.598 ************************************ 00:06:19.598 END TEST accel 00:06:19.598 ************************************ 00:06:19.598 13:51:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.598 13:51:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:19.598 13:51:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.598 13:51:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.598 13:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.856 ************************************ 00:06:19.856 START TEST accel_rpc 00:06:19.856 ************************************ 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:19.856 * Looking for test storage... 00:06:19.856 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:19.856 13:51:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.856 13:51:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2840894 00:06:19.856 13:51:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2840894 00:06:19.856 13:51:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2840894 ']' 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.856 13:51:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.856 [2024-07-15 13:51:57.817436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:19.856 [2024-07-15 13:51:57.817504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840894 ] 00:06:19.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.856 [2024-07-15 13:51:57.902174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.114 [2024-07-15 13:51:57.991007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.682 13:51:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.682 13:51:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.682 13:51:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:20.682 13:51:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:20.682 13:51:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:20.682 13:51:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:20.682 13:51:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:20.682 13:51:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.682 13:51:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.682 13:51:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.682 ************************************ 00:06:20.682 START TEST accel_assign_opcode 00:06:20.682 ************************************ 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.682 [2024-07-15 13:51:58.689074] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.682 [2024-07-15 13:51:58.697082] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.682 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.011 software 00:06:21.011 00:06:21.011 real 0m0.263s 00:06:21.011 user 0m0.043s 00:06:21.011 sys 0m0.018s 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.011 13:51:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.011 ************************************ 00:06:21.011 END TEST accel_assign_opcode 00:06:21.011 ************************************ 00:06:21.011 13:51:58 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:21.011 13:51:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2840894 00:06:21.011 13:51:58 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2840894 ']' 00:06:21.011 13:51:58 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2840894 00:06:21.011 13:51:58 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:21.011 13:51:58 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.011 13:51:58 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2840894 00:06:21.011 13:51:59 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.011 13:51:59 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.011 13:51:59 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2840894' 00:06:21.011 killing process with pid 2840894 00:06:21.011 13:51:59 accel_rpc -- common/autotest_common.sh@967 -- # kill 2840894 00:06:21.011 13:51:59 accel_rpc -- common/autotest_common.sh@972 -- # wait 2840894 00:06:21.579 00:06:21.579 real 0m1.703s 00:06:21.579 user 0m1.709s 00:06:21.579 sys 0m0.532s 00:06:21.579 13:51:59 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.579 13:51:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.579 ************************************ 00:06:21.579 END TEST accel_rpc 00:06:21.579 ************************************ 00:06:21.579 13:51:59 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.579 13:51:59 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.579 13:51:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.579 13:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.579 13:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:21.579 ************************************ 00:06:21.579 START TEST app_cmdline 00:06:21.579 ************************************ 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.579 * Looking for test storage... 
00:06:21.579 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:21.579 13:51:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.579 13:51:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2841297 00:06:21.579 13:51:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2841297 00:06:21.579 13:51:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2841297 ']' 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.579 13:51:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.579 [2024-07-15 13:51:59.603848] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:21.579 [2024-07-15 13:51:59.603943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841297 ] 00:06:21.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.837 [2024-07-15 13:51:59.687657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.837 [2024-07-15 13:51:59.776206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.406 13:52:00 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.406 13:52:00 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:22.406 13:52:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.665 { 00:06:22.665 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:06:22.665 "fields": { 00:06:22.665 "major": 24, 00:06:22.665 "minor": 9, 00:06:22.665 "patch": 0, 00:06:22.665 "suffix": "-pre", 00:06:22.665 "commit": "2728651ee" 00:06:22.665 } 00:06:22.665 } 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.665 13:52:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.665 13:52:00 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.924 request: 00:06:22.924 { 00:06:22.924 "method": "env_dpdk_get_mem_stats", 00:06:22.924 "req_id": 1 00:06:22.924 } 00:06:22.924 Got JSON-RPC error response 00:06:22.924 response: 00:06:22.924 { 00:06:22.924 "code": -32601, 00:06:22.924 "message": "Method not found" 00:06:22.924 } 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.924 13:52:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2841297 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2841297 ']' 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2841297 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2841297 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2841297' 00:06:22.924 killing process with pid 2841297 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@967 -- # kill 2841297 00:06:22.924 13:52:00 app_cmdline -- common/autotest_common.sh@972 -- # wait 2841297 00:06:23.183 00:06:23.183 real 0m1.767s 00:06:23.183 user 0m1.996s 00:06:23.183 sys 0m0.570s 00:06:23.183 13:52:01 app_cmdline -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:23.183 13:52:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.183 ************************************ 00:06:23.183 END TEST app_cmdline 00:06:23.183 ************************************ 00:06:23.441 13:52:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.442 13:52:01 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:23.442 13:52:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.442 13:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.442 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:23.442 ************************************ 00:06:23.442 START TEST version 00:06:23.442 ************************************ 00:06:23.442 13:52:01 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:23.442 * Looking for test storage... 00:06:23.442 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:23.442 13:52:01 version -- app/version.sh@17 -- # get_header_version major 00:06:23.442 13:52:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # cut -f2 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.442 13:52:01 version -- app/version.sh@17 -- # major=24 00:06:23.442 13:52:01 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.442 13:52:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # cut -f2 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.442 13:52:01 version -- app/version.sh@18 -- # minor=9 00:06:23.442 13:52:01 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.442 13:52:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # cut -f2 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.442 13:52:01 version -- app/version.sh@19 -- # patch=0 00:06:23.442 13:52:01 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.442 13:52:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # cut -f2 00:06:23.442 13:52:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.442 13:52:01 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.442 13:52:01 version -- app/version.sh@22 -- # version=24.9 00:06:23.442 13:52:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.442 13:52:01 version -- app/version.sh@28 -- # version=24.9rc0 00:06:23.442 13:52:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:23.442 13:52:01 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.442 13:52:01 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:23.442 13:52:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:23.442 00:06:23.442 real 0m0.191s 00:06:23.442 user 0m0.097s 00:06:23.442 sys 0m0.146s 00:06:23.442 13:52:01 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.442 13:52:01 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.442 ************************************ 00:06:23.442 END TEST version 00:06:23.442 ************************************ 00:06:23.701 13:52:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.701 13:52:01 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@198 -- # uname -s 00:06:23.701 13:52:01 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:23.701 13:52:01 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:23.701 13:52:01 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:23.701 13:52:01 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:23.701 13:52:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.701 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:23.701 13:52:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:23.701 13:52:01 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:23.701 13:52:01 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:23.701 13:52:01 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:23.701 13:52:01 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:23.701 13:52:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.701 13:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.701 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:23.701 ************************************ 00:06:23.701 START TEST llvm_fuzz 00:06:23.701 ************************************ 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:23.701 * Looking for test storage... 
00:06:23.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:23.701 13:52:01 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.701 13:52:01 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:23.961 ************************************ 00:06:23.961 START TEST nvmf_llvm_fuzz 00:06:23.961 ************************************ 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:23.961 * Looking for test storage... 
00:06:23.961 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:23.961 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:23.961 #define SPDK_CONFIG_H 00:06:23.961 #define SPDK_CONFIG_APPS 1 00:06:23.961 #define SPDK_CONFIG_ARCH native 00:06:23.961 #undef SPDK_CONFIG_ASAN 00:06:23.961 #undef SPDK_CONFIG_AVAHI 00:06:23.961 #undef SPDK_CONFIG_CET 00:06:23.961 #define SPDK_CONFIG_COVERAGE 1 00:06:23.961 #define SPDK_CONFIG_CROSS_PREFIX 00:06:23.961 #undef SPDK_CONFIG_CRYPTO 00:06:23.961 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:23.961 #undef SPDK_CONFIG_CUSTOMOCF 00:06:23.961 #undef SPDK_CONFIG_DAOS 00:06:23.961 #define SPDK_CONFIG_DAOS_DIR 00:06:23.962 #define SPDK_CONFIG_DEBUG 1 00:06:23.962 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:23.962 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:23.962 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:23.962 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:23.962 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:23.962 #undef SPDK_CONFIG_DPDK_UADK 00:06:23.962 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:23.962 #define SPDK_CONFIG_EXAMPLES 1 00:06:23.962 #undef SPDK_CONFIG_FC 00:06:23.962 #define SPDK_CONFIG_FC_PATH 00:06:23.962 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:23.962 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:23.962 #undef SPDK_CONFIG_FUSE 00:06:23.962 #define SPDK_CONFIG_FUZZER 1 00:06:23.962 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:23.962 #undef SPDK_CONFIG_GOLANG 00:06:23.962 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:23.962 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:23.962 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:23.962 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:23.962 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:23.962 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:23.962 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:23.962 #define SPDK_CONFIG_IDXD 1 00:06:23.962 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:23.962 #undef SPDK_CONFIG_IPSEC_MB 00:06:23.962 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:23.962 #define SPDK_CONFIG_ISAL 1 00:06:23.962 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:23.962 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:23.962 #define SPDK_CONFIG_LIBDIR 00:06:23.962 #undef SPDK_CONFIG_LTO 00:06:23.962 #define SPDK_CONFIG_MAX_LCORES 128 00:06:23.962 #define SPDK_CONFIG_NVME_CUSE 1 00:06:23.962 #undef SPDK_CONFIG_OCF 00:06:23.962 #define SPDK_CONFIG_OCF_PATH 00:06:23.962 #define SPDK_CONFIG_OPENSSL_PATH 00:06:23.962 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:23.962 #define SPDK_CONFIG_PGO_DIR 00:06:23.962 #undef SPDK_CONFIG_PGO_USE 00:06:23.962 #define SPDK_CONFIG_PREFIX /usr/local 00:06:23.962 #undef SPDK_CONFIG_RAID5F 00:06:23.962 #undef SPDK_CONFIG_RBD 00:06:23.962 #define SPDK_CONFIG_RDMA 1 00:06:23.962 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:23.962 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:23.962 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:23.962 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:23.962 #undef SPDK_CONFIG_SHARED 00:06:23.962 #undef SPDK_CONFIG_SMA 00:06:23.962 #define SPDK_CONFIG_TESTS 1 00:06:23.962 #undef SPDK_CONFIG_TSAN 00:06:23.962 #define SPDK_CONFIG_UBLK 1 00:06:23.962 #define SPDK_CONFIG_UBSAN 1 00:06:23.962 #undef SPDK_CONFIG_UNIT_TESTS 00:06:23.962 #undef SPDK_CONFIG_URING 00:06:23.962 #define SPDK_CONFIG_URING_PATH 00:06:23.962 #undef SPDK_CONFIG_URING_ZNS 00:06:23.962 #undef SPDK_CONFIG_USDT 00:06:23.962 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:23.962 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:23.962 #define SPDK_CONFIG_VFIO_USER 1 00:06:23.962 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:23.962 #define SPDK_CONFIG_VHOST 1 00:06:23.962 #define SPDK_CONFIG_VIRTIO 1 00:06:23.962 #undef SPDK_CONFIG_VTUNE 00:06:23.962 #define SPDK_CONFIG_VTUNE_DIR 00:06:23.962 #define SPDK_CONFIG_WERROR 1 00:06:23.962 #define SPDK_CONFIG_WPDK_DIR 00:06:23.962 #undef SPDK_CONFIG_XNVME 00:06:23.962 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
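[Note] The heavily escaped [[ ... == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] test traced above is applications.sh confirming that the generated include/spdk/config.h carries "#define SPDK_CONFIG_DEBUG", i.e. that this tree was configured as a debug build before any debug-only fuzz apps run. A minimal sketch of the same probe, assuming only that _root points at a configured SPDK checkout (the path below mirrors this workspace but is illustrative):

    # Slurp the generated header and glob-match it, as applications.sh does
    # in the trace above; the SPDK_AUTOTEST_DEBUG_APPS gating is omitted.
    _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    config_h="$_root/include/spdk/config.h"
    if [[ -e "$config_h" && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build: fuzz apps keep assert() coverage"
    fi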
00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:23.962 
13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:23.962 13:52:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:23.962 13:52:02 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:23.962 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
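[Note] The long run of ": 0" / "export SPDK_TEST_*" pairs above is the xtrace of autotest_common.sh applying defaults with the colon no-op: ": ${VAR:=value}" assigns only when VAR is unset or empty, so the flags set earlier by autorun-spdk.conf (SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1, ...) survive and everything else falls back, which is why a handful of traces read ": 1" or ": rdma" while most read ": 0". A hedged sketch of the idiom, with values taken from this run:

    # Each pair xtraces as ": <resolved value>" then "export NAME", matching
    # the log; the defaults shown are what this run resolved to, not a spec.
    : ${SPDK_RUN_FUNCTIONAL_TEST:=0}    # 1 here, set by autorun-spdk.conf
    : ${SPDK_TEST_FUZZER:=0}            # 1 here, set by autorun-spdk.conf
    : ${SPDK_TEST_NVMF_TRANSPORT:=rdma} # default used, so it traces as ": rdma"
    export SPDK_RUN_FUNCTIONAL_TEST SPDK_TEST_FUZZER SPDK_TEST_NVMF_TRANSPORT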
00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:23.963 13:52:02 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:23.963 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:24.223 13:52:02 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2841677 ]] 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 2841677 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.sgBOiG 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.sgBOiG/tests/nvmf /tmp/spdk.sgBOiG 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=945618944 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4338810880 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=50228748288 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742551040 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11513802752 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866563072 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871273472 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342714368 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348510208 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5795840 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870765568 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871277568 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=512000 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174248960 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174253056 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:24.223 * Looking for test storage... 00:06:24.223 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=50228748288 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=13728395264 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:24.224 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:24.224 13:52:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:24.224 [2024-07-15 13:52:02.145272] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:24.224 [2024-07-15 13:52:02.145365] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841751 ] 00:06:24.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.482 [2024-07-15 13:52:02.366757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.482 [2024-07-15 13:52:02.438282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.482 [2024-07-15 13:52:02.498167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.482 [2024-07-15 13:52:02.514487] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:24.482 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.482 INFO: Seed: 1598830655 00:06:24.740 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:24.740 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:24.740 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:24.740 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.740 #2 INITED exec/s: 0 rss: 64Mb 00:06:24.740 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
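[Note] To orient the libFuzzer output that follows: nvmf/run.sh above mapped fuzzer_type 0 to TCP port 4400 (the printf %02d), rewrote trsvcid in the JSON config with sed, registered two LSAN leak suppressions, and launched llvm_nvme_fuzz against that listener. A condensed sketch of the launch, assuming $rootdir is this SPDK checkout; the flag values are copied from the traced command:

    fuzzer_type=0
    port="44$(printf '%02d' "$fuzzer_type")"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    mkdir -p "$rootdir/../corpus/llvm_nvmf_$fuzzer_type"
    # -m core mask, -s hugepage memory in MiB, -t run time in seconds,
    # -D corpus dir, -Z fuzzer index; values as traced above.
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m 0x1 -s 512 \
        -P "$rootdir/../output/llvm/" \
        -F "$trid" \
        -c "/tmp/fuzz_json_$fuzzer_type.conf" \
        -t 1 \
        -D "$rootdir/../corpus/llvm_nvmf_$fuzzer_type" \
        -Z "$fuzzer_type"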
00:06:24.740 This may also happen if the target rejected all inputs we tried so far 00:06:24.740 [2024-07-15 13:52:02.579650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:24.740 [2024-07-15 13:52:02.579679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.998 NEW_FUNC[1/696]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:24.999 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:24.999 #17 NEW cov: 11863 ft: 11862 corp: 2/69b lim: 320 exec/s: 0 rss: 72Mb L: 68/68 MS: 5 InsertRepeatedBytes-ChangeBinInt-CrossOver-EraseBytes-CMP- DE: "\000\000\000\000\000\000\000\001"- 00:06:24.999 [2024-07-15 13:52:02.920771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.999 [2024-07-15 13:52:02.920832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.999 #19 NEW cov: 12019 ft: 12589 corp: 3/164b lim: 320 exec/s: 0 rss: 72Mb L: 95/95 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:24.999 [2024-07-15 13:52:02.970670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:a5a5a5a5 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.999 [2024-07-15 13:52:02.970701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.999 NEW_FUNC[1/1]: 0x17c03f0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:24.999 #25 NEW cov: 12048 ft: 13216 corp: 4/289b lim: 320 exec/s: 0 rss: 72Mb L: 125/125 MS: 1 InsertRepeatedBytes- 00:06:24.999 [2024-07-15 13:52:03.010805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:a5a5a5a5 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.999 [2024-07-15 13:52:03.010830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.999 #26 NEW cov: 12133 ft: 13523 corp: 5/414b lim: 320 exec/s: 0 rss: 72Mb L: 125/125 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:06:25.257 [2024-07-15 13:52:03.070869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.257 [2024-07-15 13:52:03.070894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.257 #27 NEW cov: 12133 ft: 13638 corp: 6/509b lim: 320 exec/s: 0 rss: 72Mb L: 95/125 MS: 1 ChangeBinInt- 00:06:25.257 [2024-07-15 13:52:03.121022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.257 [2024-07-15 13:52:03.121047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.257 #28 NEW cov: 12133 ft: 13726 
corp: 7/604b lim: 320 exec/s: 0 rss: 72Mb L: 95/125 MS: 1 ShuffleBytes- 00:06:25.257 [2024-07-15 13:52:03.171156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.257 [2024-07-15 13:52:03.171180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.257 #29 NEW cov: 12133 ft: 13755 corp: 8/672b lim: 320 exec/s: 0 rss: 72Mb L: 68/125 MS: 1 ChangeBit- 00:06:25.257 [2024-07-15 13:52:03.221323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.257 [2024-07-15 13:52:03.221347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.257 #35 NEW cov: 12133 ft: 13800 corp: 9/741b lim: 320 exec/s: 0 rss: 72Mb L: 69/125 MS: 1 InsertByte- 00:06:25.257 [2024-07-15 13:52:03.261561] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.257 [2024-07-15 13:52:03.261585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.257 [2024-07-15 13:52:03.261643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf7ffffff0affffff 00:06:25.257 [2024-07-15 13:52:03.261657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.257 #36 NEW cov: 12133 ft: 14011 corp: 10/891b lim: 320 exec/s: 0 rss: 72Mb L: 150/150 MS: 1 CrossOver- 00:06:25.257 [2024-07-15 13:52:03.301546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.257 [2024-07-15 13:52:03.301570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.515 #37 NEW cov: 12133 ft: 14107 corp: 11/964b lim: 320 exec/s: 0 rss: 73Mb L: 73/150 MS: 1 CMP- DE: "\377\377\377\013"- 00:06:25.515 [2024-07-15 13:52:03.351725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.515 [2024-07-15 13:52:03.351750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.515 #38 NEW cov: 12133 ft: 14136 corp: 12/1059b lim: 320 exec/s: 0 rss: 73Mb L: 95/150 MS: 1 PersAutoDict- DE: "\377\377\377\013"- 00:06:25.515 [2024-07-15 13:52:03.391801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.515 [2024-07-15 13:52:03.391825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.515 #39 NEW cov: 12133 ft: 14160 corp: 13/1154b lim: 320 exec/s: 0 rss: 73Mb L: 95/150 MS: 1 ChangeBit- 00:06:25.515 [2024-07-15 13:52:03.431914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.515 [2024-07-15 13:52:03.431940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.515 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:25.515 #40 NEW cov: 12156 ft: 14215 corp: 14/1249b lim: 320 exec/s: 0 rss: 73Mb L: 95/150 MS: 1 ChangeByte- 00:06:25.515 [2024-07-15 13:52:03.482075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.515 [2024-07-15 13:52:03.482101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.515 #41 NEW cov: 12156 ft: 14250 corp: 15/1318b lim: 320 exec/s: 0 rss: 73Mb L: 69/150 MS: 1 InsertByte- 00:06:25.515 [2024-07-15 13:52:03.532140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.515 [2024-07-15 13:52:03.532164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.515 #42 NEW cov: 12156 ft: 14304 corp: 16/1413b lim: 320 exec/s: 0 rss: 73Mb L: 95/150 MS: 1 ChangeBinInt- 00:06:25.515 [2024-07-15 13:52:03.572270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.515 [2024-07-15 13:52:03.572296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.774 #43 NEW cov: 12156 ft: 14374 corp: 17/1508b lim: 320 exec/s: 43 rss: 73Mb L: 95/150 MS: 1 ShuffleBytes- 00:06:25.774 [2024-07-15 13:52:03.622430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffdf1c 00:06:25.774 [2024-07-15 13:52:03.622455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.774 #44 NEW cov: 12156 ft: 14395 corp: 18/1578b lim: 320 exec/s: 44 rss: 73Mb L: 70/150 MS: 1 CMP- DE: "\001\034"- 00:06:25.774 [2024-07-15 13:52:03.662554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.774 [2024-07-15 13:52:03.662579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.774 #45 NEW cov: 12156 ft: 14409 corp: 19/1673b lim: 320 exec/s: 45 rss: 73Mb L: 95/150 MS: 1 ChangeBit- 00:06:25.774 [2024-07-15 13:52:03.702671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ff23ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.774 [2024-07-15 13:52:03.702696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.774 #46 NEW cov: 12156 ft: 14435 corp: 20/1743b lim: 320 exec/s: 46 rss: 73Mb L: 70/150 MS: 1 InsertByte- 00:06:25.774 [2024-07-15 13:52:03.742731] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.774 [2024-07-15 13:52:03.742756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.774 #47 NEW cov: 12156 ft: 14453 corp: 21/1819b lim: 320 exec/s: 47 rss: 73Mb L: 76/150 MS: 1 InsertRepeatedBytes- 00:06:25.774 [2024-07-15 13:52:03.782946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:a5a5a5a5 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.774 [2024-07-15 13:52:03.782971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.774 #48 NEW cov: 12156 ft: 14505 corp: 22/1944b lim: 320 exec/s: 48 rss: 73Mb L: 125/150 MS: 1 ChangeBit- 00:06:25.774 [2024-07-15 13:52:03.832999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:25.774 [2024-07-15 13:52:03.833024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.033 #49 NEW cov: 12156 ft: 14558 corp: 23/2012b lim: 320 exec/s: 49 rss: 73Mb L: 68/150 MS: 1 PersAutoDict- DE: "\001\034"- 00:06:26.033 [2024-07-15 13:52:03.873101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffff0b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.033 [2024-07-15 13:52:03.873125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.033 #50 NEW cov: 12156 ft: 14652 corp: 24/2088b lim: 320 exec/s: 50 rss: 73Mb L: 76/150 MS: 1 PersAutoDict- DE: "\377\377\377\013"- 00:06:26.033 [2024-07-15 13:52:03.923266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffdf1c 00:06:26.033 [2024-07-15 13:52:03.923293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.033 #51 NEW cov: 12156 ft: 14660 corp: 25/2158b lim: 320 exec/s: 51 rss: 74Mb L: 70/150 MS: 1 ShuffleBytes- 00:06:26.033 [2024-07-15 13:52:03.973438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.033 [2024-07-15 13:52:03.973464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.033 #52 NEW cov: 12156 ft: 14667 corp: 26/2253b lim: 320 exec/s: 52 rss: 74Mb L: 95/150 MS: 1 ChangeBinInt- 00:06:26.033 [2024-07-15 13:52:04.023545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff0010 00:06:26.033 [2024-07-15 13:52:04.023570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.033 #53 NEW cov: 12156 ft: 14695 corp: 27/2322b lim: 320 exec/s: 53 rss: 74Mb L: 69/150 MS: 1 CMP- DE: "\020\000"- 00:06:26.033 
[2024-07-15 13:52:04.073681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffff0bff cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.033 [2024-07-15 13:52:04.073705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.292 #54 NEW cov: 12156 ft: 14714 corp: 28/2399b lim: 320 exec/s: 54 rss: 74Mb L: 77/150 MS: 1 InsertByte- 00:06:26.292 [2024-07-15 13:52:04.133945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:a5a5a5a5 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.292 [2024-07-15 13:52:04.133973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.292 #60 NEW cov: 12156 ft: 14734 corp: 29/2524b lim: 320 exec/s: 60 rss: 74Mb L: 125/150 MS: 1 ChangeBinInt- 00:06:26.292 [2024-07-15 13:52:04.174037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:a5a5a5a5 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.292 [2024-07-15 13:52:04.174061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.292 #61 NEW cov: 12156 ft: 14751 corp: 30/2649b lim: 320 exec/s: 61 rss: 74Mb L: 125/150 MS: 1 CMP- DE: "7?\000\000"- 00:06:26.292 [2024-07-15 13:52:04.224166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.292 [2024-07-15 13:52:04.224190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.292 #62 NEW cov: 12156 ft: 14769 corp: 31/2744b lim: 320 exec/s: 62 rss: 74Mb L: 95/150 MS: 1 InsertRepeatedBytes- 00:06:26.292 [2024-07-15 13:52:04.264336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:f8f8f8f8 cdw11:f8f8f8f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf8f8f8f8f8ffffff 00:06:26.292 [2024-07-15 13:52:04.264360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.292 [2024-07-15 13:52:04.264434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f8) qid:0 cid:5 nsid:f8f8f8f8 cdw10:f8f8f8f8 cdw11:fff8f8f8 00:06:26.292 [2024-07-15 13:52:04.264448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.292 #63 NEW cov: 12157 ft: 14792 corp: 32/2904b lim: 320 exec/s: 63 rss: 74Mb L: 160/160 MS: 1 InsertRepeatedBytes- 00:06:26.292 [2024-07-15 13:52:04.304350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.292 [2024-07-15 13:52:04.304376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.292 #64 NEW cov: 12157 ft: 14801 corp: 33/3001b lim: 320 exec/s: 64 rss: 74Mb L: 97/160 MS: 1 CrossOver- 00:06:26.292 [2024-07-15 13:52:04.354640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.292 [2024-07-15 13:52:04.354664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.551 #65 NEW cov: 12157 ft: 14834 corp: 34/3088b lim: 320 exec/s: 65 rss: 74Mb L: 87/160 MS: 1 EraseBytes- 00:06:26.551 [2024-07-15 13:52:04.404560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:a5ffffff cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa5a5a5a5a5a5a5a5 00:06:26.551 [2024-07-15 13:52:04.404584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.551 #68 NEW cov: 12157 ft: 14846 corp: 35/3153b lim: 320 exec/s: 68 rss: 74Mb L: 65/160 MS: 3 CrossOver-InsertByte-CopyPart- 00:06:26.551 [2024-07-15 13:52:04.444755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.551 [2024-07-15 13:52:04.444779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.551 #69 NEW cov: 12157 ft: 14880 corp: 36/3252b lim: 320 exec/s: 69 rss: 74Mb L: 99/160 MS: 1 PersAutoDict- DE: "7?\000\000"- 00:06:26.551 [2024-07-15 13:52:04.484865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (0f) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.551 [2024-07-15 13:52:04.484892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.551 #70 NEW cov: 12157 ft: 14888 corp: 37/3348b lim: 320 exec/s: 70 rss: 74Mb L: 96/160 MS: 1 InsertByte- 00:06:26.551 [2024-07-15 13:52:04.524957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.551 [2024-07-15 13:52:04.524982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.551 #71 NEW cov: 12157 ft: 14900 corp: 38/3445b lim: 320 exec/s: 71 rss: 75Mb L: 97/160 MS: 1 CrossOver- 00:06:26.551 [2024-07-15 13:52:04.575135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.551 [2024-07-15 13:52:04.575159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.551 #72 NEW cov: 12157 ft: 14902 corp: 39/3538b lim: 320 exec/s: 36 rss: 75Mb L: 93/160 MS: 1 EraseBytes- 00:06:26.551 #72 DONE cov: 12157 ft: 14902 corp: 39/3538b lim: 320 exec/s: 36 rss: 75Mb 00:06:26.551 ###### Recommended dictionary. ###### 00:06:26.551 "\000\000\000\000\000\000\000\001" # Uses: 2 00:06:26.551 "\377\377\377\013" # Uses: 2 00:06:26.551 "\001\034" # Uses: 1 00:06:26.551 "\020\000" # Uses: 0 00:06:26.551 "7?\000\000" # Uses: 1 00:06:26.551 ###### End of recommended dictionary. 
###### 00:06:26.551 Done 72 runs in 2 second(s) 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.810 13:52:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:26.810 [2024-07-15 13:52:04.792977] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:26.810 [2024-07-15 13:52:04.793048] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842084 ] 00:06:26.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.069 [2024-07-15 13:52:05.009235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.069 [2024-07-15 13:52:05.083199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.328 [2024-07-15 13:52:05.142938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.328 [2024-07-15 13:52:05.159244] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:27.328 INFO: Running with entropic power schedule (0xFF, 100). 00:06:27.328 INFO: Seed: 4240853856 00:06:27.328 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:27.328 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:27.328 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:27.328 INFO: A corpus is not provided, starting from an empty corpus 00:06:27.328 #2 INITED exec/s: 0 rss: 65Mb 00:06:27.328 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:27.328 This may also happen if the target rejected all inputs we tried so far 00:06:27.328 [2024-07-15 13:52:05.217502] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.328 [2024-07-15 13:52:05.217618] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.328 [2024-07-15 13:52:05.217722] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.328 [2024-07-15 13:52:05.217826] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.328 [2024-07-15 13:52:05.218033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.328 [2024-07-15 13:52:05.218062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.328 [2024-07-15 13:52:05.218117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.328 [2024-07-15 13:52:05.218131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.328 [2024-07-15 13:52:05.218182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.328 [2024-07-15 13:52:05.218196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.328 [2024-07-15 13:52:05.218246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.328 [2024-07-15 13:52:05.218260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.587 NEW_FUNC[1/696]: 0x484780 in 
fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:27.587 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:27.587 #9 NEW cov: 11929 ft: 11928 corp: 2/30b lim: 30 exec/s: 0 rss: 71Mb L: 29/29 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:27.587 [2024-07-15 13:52:05.558537] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.587 [2024-07-15 13:52:05.558685] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.587 [2024-07-15 13:52:05.558796] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.587 [2024-07-15 13:52:05.558901] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.587 [2024-07-15 13:52:05.559120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.587 [2024-07-15 13:52:05.559179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.587 [2024-07-15 13:52:05.559266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.587 [2024-07-15 13:52:05.559294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.587 [2024-07-15 13:52:05.559371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.587 [2024-07-15 13:52:05.559396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.587 [2024-07-15 13:52:05.559473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.587 [2024-07-15 13:52:05.559498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.587 #10 NEW cov: 12059 ft: 12660 corp: 3/57b lim: 30 exec/s: 0 rss: 72Mb L: 27/29 MS: 1 EraseBytes- 00:06:27.587 [2024-07-15 13:52:05.618418] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.588 [2024-07-15 13:52:05.618547] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.588 [2024-07-15 13:52:05.618649] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.588 [2024-07-15 13:52:05.618746] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.588 [2024-07-15 13:52:05.618966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.588 [2024-07-15 13:52:05.618992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.588 [2024-07-15 13:52:05.619044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.588 [2024-07-15 13:52:05.619058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.588 [2024-07-15 13:52:05.619108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.588 [2024-07-15 13:52:05.619121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.588 [2024-07-15 13:52:05.619171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.588 [2024-07-15 13:52:05.619185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.588 #12 NEW cov: 12065 ft: 12933 corp: 4/86b lim: 30 exec/s: 0 rss: 72Mb L: 29/29 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:27.588 [2024-07-15 13:52:05.658580] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.588 [2024-07-15 13:52:05.658691] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.588 [2024-07-15 13:52:05.658792] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009b9b 00:06:27.847 [2024-07-15 13:52:05.658891] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:27.847 [2024-07-15 13:52:05.659088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.659114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.659165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.659184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.659241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b819b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.659254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.659306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.659320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.847 #13 NEW cov: 12150 ft: 13228 corp: 5/114b lim: 30 exec/s: 0 rss: 72Mb L: 28/29 MS: 1 InsertByte- 00:06:27.847 [2024-07-15 13:52:05.708603] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:27.847 [2024-07-15 13:52:05.708727] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:27.847 [2024-07-15 13:52:05.708924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:29ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.708949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.709002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.709015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.847 #15 NEW cov: 12150 ft: 13819 corp: 6/129b lim: 30 exec/s: 0 rss: 72Mb L: 15/29 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:27.847 [2024-07-15 13:52:05.748751] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.847 [2024-07-15 13:52:05.748876] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.847 [2024-07-15 13:52:05.748979] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.847 [2024-07-15 13:52:05.749078] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:27.847 [2024-07-15 13:52:05.749278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.749303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.749366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.749379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.749430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.749444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.749493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.749506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.847 #16 NEW cov: 12150 ft: 13889 corp: 7/153b lim: 30 exec/s: 0 rss: 72Mb L: 24/29 MS: 1 EraseBytes- 00:06:27.847 [2024-07-15 13:52:05.798971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.798998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.847 #18 NEW cov: 12182 ft: 14377 corp: 8/161b lim: 30 exec/s: 0 rss: 72Mb L: 8/29 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:27.847 [2024-07-15 13:52:05.838959] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:27.847 [2024-07-15 13:52:05.839083] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log 
page offset 0x30000ffff 00:06:27.847 [2024-07-15 13:52:05.839295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b2983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.839319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.847 [2024-07-15 13:52:05.839371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.839384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.847 #19 NEW cov: 12182 ft: 14414 corp: 9/177b lim: 30 exec/s: 0 rss: 72Mb L: 16/29 MS: 1 InsertByte- 00:06:27.847 [2024-07-15 13:52:05.889202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.847 [2024-07-15 13:52:05.889231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.105 #20 NEW cov: 12182 ft: 14426 corp: 10/185b lim: 30 exec/s: 0 rss: 72Mb L: 8/29 MS: 1 ChangeByte- 00:06:28.105 [2024-07-15 13:52:05.939323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.105 [2024-07-15 13:52:05.939347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.105 #21 NEW cov: 12182 ft: 14519 corp: 11/193b lim: 30 exec/s: 0 rss: 72Mb L: 8/29 MS: 1 ChangeBinInt- 00:06:28.105 [2024-07-15 13:52:05.979431] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.105 [2024-07-15 13:52:05.979562] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.105 [2024-07-15 13:52:05.979667] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.105 [2024-07-15 13:52:05.979770] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.105 [2024-07-15 13:52:05.979874] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a0a 00:06:28.105 [2024-07-15 13:52:05.980072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.105 [2024-07-15 13:52:05.980097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.105 [2024-07-15 13:52:05.980151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.105 [2024-07-15 13:52:05.980165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.105 [2024-07-15 13:52:05.980222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.105 [2024-07-15 13:52:05.980236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:28.105 [2024-07-15 13:52:05.980290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.105 [2024-07-15 13:52:05.980308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.105 [2024-07-15 13:52:05.980359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:9b9b029b cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.105 [2024-07-15 13:52:05.980374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.105 #22 NEW cov: 12182 ft: 14615 corp: 12/223b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertByte- 00:06:28.105 [2024-07-15 13:52:06.019498] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.105 [2024-07-15 13:52:06.019619] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.106 [2024-07-15 13:52:06.019721] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.106 [2024-07-15 13:52:06.019819] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.106 [2024-07-15 13:52:06.020010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.020035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.020087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63008318 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.020101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.020153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.020167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.020224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.020239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.106 #23 NEW cov: 12182 ft: 14632 corp: 13/247b lim: 30 exec/s: 0 rss: 72Mb L: 24/30 MS: 1 ChangeBinInt- 00:06:28.106 [2024-07-15 13:52:06.069623] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.106 [2024-07-15 13:52:06.069735] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (888208) > buf size (4096) 00:06:28.106 [2024-07-15 13:52:06.069836] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.106 [2024-07-15 13:52:06.069937] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.106 [2024-07-15 13:52:06.070135] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.070161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.070214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.070233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.070284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.070297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.070349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.070366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.106 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:28.106 #24 NEW cov: 12213 ft: 14719 corp: 14/274b lim: 30 exec/s: 0 rss: 72Mb L: 27/30 MS: 1 InsertRepeatedBytes- 00:06:28.106 [2024-07-15 13:52:06.119689] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.106 [2024-07-15 13:52:06.119799] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.106 [2024-07-15 13:52:06.119990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:db2983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.120014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.120066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.120079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.106 #26 NEW cov: 12213 ft: 14732 corp: 15/287b lim: 30 exec/s: 0 rss: 72Mb L: 13/30 MS: 2 ChangeByte-CrossOver- 00:06:28.106 [2024-07-15 13:52:06.159854] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.106 [2024-07-15 13:52:06.159965] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.106 [2024-07-15 13:52:06.160063] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.106 [2024-07-15 13:52:06.160161] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.106 [2024-07-15 13:52:06.160377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.160403] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.160457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.160471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.160523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.160536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.106 [2024-07-15 13:52:06.160590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.106 [2024-07-15 13:52:06.160604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.364 #27 NEW cov: 12213 ft: 14763 corp: 16/314b lim: 30 exec/s: 0 rss: 72Mb L: 27/30 MS: 1 CopyPart- 00:06:28.364 [2024-07-15 13:52:06.200069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.200096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.364 #28 NEW cov: 12213 ft: 14804 corp: 17/322b lim: 30 exec/s: 28 rss: 73Mb L: 8/30 MS: 1 ShuffleBytes- 00:06:28.364 [2024-07-15 13:52:06.250102] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.364 [2024-07-15 13:52:06.250237] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff63 00:06:28.364 [2024-07-15 13:52:06.250343] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.364 [2024-07-15 13:52:06.250448] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.364 [2024-07-15 13:52:06.250652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.250678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.364 [2024-07-15 13:52:06.250731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:638a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.250745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.364 [2024-07-15 13:52:06.250798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.250811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.364 [2024-07-15 13:52:06.250863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.250876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.364 #29 NEW cov: 12213 ft: 14818 corp: 18/349b lim: 30 exec/s: 29 rss: 73Mb L: 27/30 MS: 1 CopyPart- 00:06:28.364 [2024-07-15 13:52:06.300148] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:06:28.364 [2024-07-15 13:52:06.300381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.300404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.364 #30 NEW cov: 12213 ft: 14860 corp: 19/360b lim: 30 exec/s: 30 rss: 73Mb L: 11/30 MS: 1 InsertRepeatedBytes- 00:06:28.364 [2024-07-15 13:52:06.350305] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.364 [2024-07-15 13:52:06.350511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.350536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.364 #31 NEW cov: 12213 ft: 14893 corp: 20/368b lim: 30 exec/s: 31 rss: 73Mb L: 8/30 MS: 1 CrossOver- 00:06:28.364 [2024-07-15 13:52:06.390530] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.364 [2024-07-15 13:52:06.390665] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.364 [2024-07-15 13:52:06.390783] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009b9b 00:06:28.364 [2024-07-15 13:52:06.390881] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.364 [2024-07-15 13:52:06.391073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.391099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.364 [2024-07-15 13:52:06.391151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.391166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.364 [2024-07-15 13:52:06.391222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b819b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.391239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.364 [2024-07-15 13:52:06.391292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.364 [2024-07-15 13:52:06.391306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:28.364 #32 NEW cov: 12213 ft: 14981 corp: 21/397b lim: 30 exec/s: 32 rss: 73Mb L: 29/30 MS: 1 InsertByte- 00:06:28.621 [2024-07-15 13:52:06.440637] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.621 [2024-07-15 13:52:06.440752] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x59 00:06:28.621 [2024-07-15 13:52:06.440956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:002f8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.621 [2024-07-15 13:52:06.440982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.621 [2024-07-15 13:52:06.441035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.441050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.622 #33 NEW cov: 12213 ft: 15013 corp: 22/409b lim: 30 exec/s: 33 rss: 73Mb L: 12/30 MS: 1 InsertByte- 00:06:28.622 [2024-07-15 13:52:06.490775] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.490887] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (888208) > buf size (4096) 00:06:28.622 [2024-07-15 13:52:06.490988] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.491091] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.491319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.491344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.491395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.491410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.491466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00188363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.491479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.491531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.491546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.622 #34 NEW cov: 12213 ft: 15069 corp: 23/438b lim: 30 exec/s: 34 rss: 73Mb L: 29/30 MS: 1 CrossOver- 00:06:28.622 [2024-07-15 13:52:06.530948] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.622 [2024-07-15 13:52:06.531062] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x300009b9b 00:06:28.622 [2024-07-15 13:52:06.531164] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.622 [2024-07-15 13:52:06.531273] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd9b 00:06:28.622 [2024-07-15 13:52:06.531376] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a0a 00:06:28.622 [2024-07-15 13:52:06.531585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.531610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.531664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.531678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.531730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.531744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.531795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.531819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.531886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:9b9b029b cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.531900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.622 #35 NEW cov: 12213 ft: 15080 corp: 24/468b lim: 30 exec/s: 35 rss: 73Mb L: 30/30 MS: 1 CMP- DE: "\001\000\000\015"- 00:06:28.622 [2024-07-15 13:52:06.580982] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.622 [2024-07-15 13:52:06.581092] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.622 [2024-07-15 13:52:06.581290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b2983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.581315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.581367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.581381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.622 #36 NEW cov: 12213 ft: 15097 corp: 25/484b lim: 30 exec/s: 36 rss: 73Mb L: 16/30 MS: 1 CrossOver- 00:06:28.622 [2024-07-15 13:52:06.631176] 
ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.631307] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.631413] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.631512] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.622 [2024-07-15 13:52:06.631708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.631733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.631788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.631802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.631856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.631870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.631923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.631936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.622 #37 NEW cov: 12213 ft: 15102 corp: 26/509b lim: 30 exec/s: 37 rss: 73Mb L: 25/30 MS: 1 CopyPart- 00:06:28.622 [2024-07-15 13:52:06.671276] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.622 [2024-07-15 13:52:06.671401] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.622 [2024-07-15 13:52:06.671503] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.622 [2024-07-15 13:52:06.671600] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.622 [2024-07-15 13:52:06.671804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b2983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.671829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.671880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.671894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.671945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.671958] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.622 [2024-07-15 13:52:06.672010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.622 [2024-07-15 13:52:06.672023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.880 #38 NEW cov: 12213 ft: 15144 corp: 27/537b lim: 30 exec/s: 38 rss: 73Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:06:28.880 [2024-07-15 13:52:06.711375] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.711484] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.711587] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.711688] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.711891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.711916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.711971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.711984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.712037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.712053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.712105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.712118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.880 #39 NEW cov: 12213 ft: 15164 corp: 28/561b lim: 30 exec/s: 39 rss: 73Mb L: 24/30 MS: 1 EraseBytes- 00:06:28.880 [2024-07-15 13:52:06.761494] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.880 [2024-07-15 13:52:06.761604] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.880 [2024-07-15 13:52:06.761800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.761825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.761879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 
[2024-07-15 13:52:06.761892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.880 #40 NEW cov: 12213 ft: 15186 corp: 29/578b lim: 30 exec/s: 40 rss: 73Mb L: 17/30 MS: 1 CrossOver- 00:06:28.880 [2024-07-15 13:52:06.811681] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.811790] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.811891] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.811990] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:28.880 [2024-07-15 13:52:06.812187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.812211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.812268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.812281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.812336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.812349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.812398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.812411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.880 #41 NEW cov: 12213 ft: 15193 corp: 30/602b lim: 30 exec/s: 41 rss: 73Mb L: 24/30 MS: 1 ChangeBit- 00:06:28.880 [2024-07-15 13:52:06.861959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.861984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.880 #42 NEW cov: 12213 ft: 15202 corp: 31/610b lim: 30 exec/s: 42 rss: 74Mb L: 8/30 MS: 1 ShuffleBytes- 00:06:28.880 [2024-07-15 13:52:06.901864] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.880 [2024-07-15 13:52:06.901982] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.880 [2024-07-15 13:52:06.902081] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:28.880 [2024-07-15 13:52:06.902282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.902307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.902360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.902374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.902424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.902438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.880 #43 NEW cov: 12213 ft: 15419 corp: 32/628b lim: 30 exec/s: 43 rss: 74Mb L: 18/30 MS: 1 CrossOver- 00:06:28.880 [2024-07-15 13:52:06.941972] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.880 [2024-07-15 13:52:06.942085] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.880 [2024-07-15 13:52:06.942187] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:28.880 [2024-07-15 13:52:06.942390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b2983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.942415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.942469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.942483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.880 [2024-07-15 13:52:06.942534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.880 [2024-07-15 13:52:06.942547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.138 #44 NEW cov: 12213 ft: 15436 corp: 33/648b lim: 30 exec/s: 44 rss: 74Mb L: 20/30 MS: 1 CopyPart- 00:06:29.138 [2024-07-15 13:52:06.982185] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:29.138 [2024-07-15 13:52:06.982322] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:06.982429] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:06.982533] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:06.982634] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:06.982839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:06.982864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:06.982921] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:06.982934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:06.982989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63008318 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:06.983003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:06.983053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:06.983067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:06.983118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:06.983132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.138 #45 NEW cov: 12213 ft: 15444 corp: 34/678b lim: 30 exec/s: 45 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:29.138 [2024-07-15 13:52:07.022247] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:07.022373] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:07.022478] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:07.022579] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:07.022778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.022804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.022859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.022873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.022926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.022940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.022994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.023007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.138 #46 NEW 
cov: 12213 ft: 15518 corp: 35/703b lim: 30 exec/s: 46 rss: 74Mb L: 25/30 MS: 1 ShuffleBytes- 00:06:29.138 [2024-07-15 13:52:07.062361] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:07.062487] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (888208) > buf size (4096) 00:06:29.138 [2024-07-15 13:52:07.062589] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006363 00:06:29.138 [2024-07-15 13:52:07.062688] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1b63 00:06:29.138 [2024-07-15 13:52:07.062902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.062927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.062981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.062996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.063051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:63638363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.063065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.063117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:63630063 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.063131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.138 #47 NEW cov: 12213 ft: 15529 corp: 36/730b lim: 30 exec/s: 47 rss: 74Mb L: 27/30 MS: 1 ChangeBinInt- 00:06:29.138 [2024-07-15 13:52:07.102524] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.138 [2024-07-15 13:52:07.102636] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.138 [2024-07-15 13:52:07.102735] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009b9b 00:06:29.138 [2024-07-15 13:52:07.102832] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.138 [2024-07-15 13:52:07.102935] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002f0a 00:06:29.138 [2024-07-15 13:52:07.103150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.103175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.103230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.103260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.103312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b819b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.103325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.103378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.103391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.138 [2024-07-15 13:52:07.103443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:399b029b cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.103456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.138 #48 NEW cov: 12213 ft: 15563 corp: 37/760b lim: 30 exec/s: 48 rss: 74Mb L: 30/30 MS: 1 InsertByte- 00:06:29.138 [2024-07-15 13:52:07.142493] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:29.138 [2024-07-15 13:52:07.142701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b2983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.138 [2024-07-15 13:52:07.142724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.138 #49 NEW cov: 12213 ft: 15565 corp: 38/768b lim: 30 exec/s: 49 rss: 74Mb L: 8/30 MS: 1 EraseBytes- 00:06:29.139 [2024-07-15 13:52:07.192710] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.139 [2024-07-15 13:52:07.192823] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.139 [2024-07-15 13:52:07.192927] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.139 [2024-07-15 13:52:07.193034] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:06:29.139 [2024-07-15 13:52:07.193254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.139 [2024-07-15 13:52:07.193280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.139 [2024-07-15 13:52:07.193333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.139 [2024-07-15 13:52:07.193347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.139 [2024-07-15 13:52:07.193397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.139 [2024-07-15 13:52:07.193411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.139 [2024-07-15 13:52:07.193462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.139 [2024-07-15 13:52:07.193475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.397 #50 NEW cov: 12213 ft: 15575 corp: 39/795b lim: 30 exec/s: 25 rss: 74Mb L: 27/30 MS: 1 ShuffleBytes- 00:06:29.397 #50 DONE cov: 12213 ft: 15575 corp: 39/795b lim: 30 exec/s: 25 rss: 74Mb 00:06:29.397 ###### Recommended dictionary. ###### 00:06:29.397 "\001\000\000\015" # Uses: 0 00:06:29.397 ###### End of recommended dictionary. ###### 00:06:29.397 Done 50 runs in 2 second(s) 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.397 13:52:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:29.397 [2024-07-15 13:52:07.393336] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
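The run.sh trace above contains everything needed to launch one fuzzer type by hand: a per-type TCP port derived as 44 plus the zero-padded type number, a JSON config rewritten by sed so the target listens on that port, two LSAN leak suppressions, and the llvm_nvme_fuzz invocation against the resulting transport ID. A minimal standalone sketch follows, using only the flags visible in the trace; all paths are placeholders, and the sed output redirect is an assumption since the trace does not show it:

  # Sketch: reproduce a single short-fuzz iteration outside Jenkins.
  SPDK=/path/to/spdk                        # hypothetical checkout location
  TYPE=2                                    # fuzzer type; 2 drives the admin IDENTIFY fuzzer
  PORT=44$(printf %02d "$TYPE")             # same scheme as run.sh: 4402 for type 2
  mkdir -p "$SPDK/../corpus/llvm_nvmf_$TYPE"
  # Rewrite the shared config for the per-type listener port, as run.sh does
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$TYPE.conf"
  # Same leak suppressions the trace echoes into the suppression file
  echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_nvmf_fuzz
  echo leak:nvmf_ctrlr_create >> /var/tmp/suppress_nvmf_fuzz
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P /path/to/output/llvm/ \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
      -c "/tmp/fuzz_json_$TYPE.conf" -t 1 -D "$SPDK/../corpus/llvm_nvmf_$TYPE" -Z "$TYPE"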
00:06:29.397 [2024-07-15 13:52:07.393407] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842458 ] 00:06:29.397 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.656 [2024-07-15 13:52:07.602125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.656 [2024-07-15 13:52:07.673523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.913 [2024-07-15 13:52:07.733335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.913 [2024-07-15 13:52:07.749618] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:29.913 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.913 INFO: Seed: 2538867721 00:06:29.913 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:29.913 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:29.913 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:29.913 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.913 #2 INITED exec/s: 0 rss: 65Mb 00:06:29.913 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:29.913 This may also happen if the target rejected all inputs we tried so far 00:06:29.913 [2024-07-15 13:52:07.826825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.913 [2024-07-15 13:52:07.826870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.190 NEW_FUNC[1/695]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:30.190 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:30.190 #5 NEW cov: 11882 ft: 11881 corp: 2/14b lim: 35 exec/s: 0 rss: 72Mb L: 13/13 MS: 3 InsertRepeatedBytes-EraseBytes-CMP- DE: "\001\000\000\000\000\000\0009"- 00:06:30.190 [2024-07-15 13:52:08.177348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.190 [2024-07-15 13:52:08.177391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.190 #10 NEW cov: 12015 ft: 12458 corp: 3/23b lim: 35 exec/s: 0 rss: 72Mb L: 9/13 MS: 5 InsertByte-EraseBytes-ShuffleBytes-ChangeBit-PersAutoDict- DE: "\001\000\000\000\000\000\0009"- 00:06:30.190 #15 NEW cov: 12021 ft: 13112 corp: 4/33b lim: 35 exec/s: 0 rss: 72Mb L: 10/13 MS: 5 ChangeByte-InsertByte-ChangeBinInt-CopyPart-PersAutoDict- DE: "\001\000\000\000\000\000\0009"- 00:06:30.448 [2024-07-15 13:52:08.277262] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.449 [2024-07-15 13:52:08.277727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.277766] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.449 #16 NEW cov: 12115 ft: 13413 corp: 5/42b lim: 35 exec/s: 0 rss: 72Mb L: 9/13 MS: 1 ShuffleBytes- 00:06:30.449 [2024-07-15 13:52:08.337803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:40000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.337833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.449 #17 NEW cov: 12115 ft: 13673 corp: 6/55b lim: 35 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeBit- 00:06:30.449 [2024-07-15 13:52:08.398364] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.449 [2024-07-15 13:52:08.398806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:cc000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.398832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.449 [2024-07-15 13:52:08.398919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.398935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.449 [2024-07-15 13:52:08.399026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.399042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.449 [2024-07-15 13:52:08.399127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.399145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.449 #18 NEW cov: 12115 ft: 14346 corp: 7/83b lim: 35 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:30.449 [2024-07-15 13:52:08.458261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.458290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.449 #19 NEW cov: 12115 ft: 14410 corp: 8/92b lim: 35 exec/s: 0 rss: 72Mb L: 9/28 MS: 1 ChangeBit- 00:06:30.449 [2024-07-15 13:52:08.508972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d80008 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.508998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.449 [2024-07-15 13:52:08.509099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.509116] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.449 [2024-07-15 13:52:08.509207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d8d800d8 cdw11:0100d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.449 [2024-07-15 13:52:08.509224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.707 #20 NEW cov: 12115 ft: 14627 corp: 9/119b lim: 35 exec/s: 0 rss: 72Mb L: 27/28 MS: 1 InsertRepeatedBytes- 00:06:30.707 [2024-07-15 13:52:08.568957] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.707 [2024-07-15 13:52:08.569421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:cc000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.569448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.707 [2024-07-15 13:52:08.569538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.569555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.707 [2024-07-15 13:52:08.569644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.569663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.707 [2024-07-15 13:52:08.569747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00cc0000 cdw11:4000cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.569766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.707 #21 NEW cov: 12115 ft: 14755 corp: 10/153b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CopyPart- 00:06:30.707 [2024-07-15 13:52:08.628631] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.707 [2024-07-15 13:52:08.629074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.629102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.707 [2024-07-15 13:52:08.629193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.629214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.707 #22 NEW cov: 12115 ft: 14952 corp: 11/170b lim: 35 exec/s: 0 rss: 72Mb L: 17/34 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:30.707 [2024-07-15 13:52:08.678943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 
13:52:08.678969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.707 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:30.707 #23 NEW cov: 12138 ft: 14992 corp: 12/183b lim: 35 exec/s: 0 rss: 72Mb L: 13/34 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:06:30.707 [2024-07-15 13:52:08.729163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01470008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.707 [2024-07-15 13:52:08.729189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.707 #24 NEW cov: 12138 ft: 15037 corp: 13/193b lim: 35 exec/s: 0 rss: 72Mb L: 10/34 MS: 1 InsertByte- 00:06:30.965 [2024-07-15 13:52:08.779327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.779354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.965 #27 NEW cov: 12138 ft: 15058 corp: 14/202b lim: 35 exec/s: 27 rss: 72Mb L: 9/34 MS: 3 EraseBytes-InsertByte-CopyPart- 00:06:30.965 [2024-07-15 13:52:08.829894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.965 [2024-07-15 13:52:08.830379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:cc000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.830406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.965 [2024-07-15 13:52:08.830494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.830510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.965 [2024-07-15 13:52:08.830598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.830614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.965 [2024-07-15 13:52:08.830706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.830725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.965 #28 NEW cov: 12138 ft: 15096 corp: 15/230b lim: 35 exec/s: 28 rss: 72Mb L: 28/34 MS: 1 ChangeByte- 00:06:30.965 [2024-07-15 13:52:08.879721] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.965 [2024-07-15 13:52:08.880207] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.965 [2024-07-15 13:52:08.880658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:01000100 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.880687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.965 [2024-07-15 13:52:08.880771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00cc0000 cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.880790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.965 [2024-07-15 13:52:08.880878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.880893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.965 [2024-07-15 13:52:08.880974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.880993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.965 #29 NEW cov: 12138 ft: 15133 corp: 16/258b lim: 35 exec/s: 29 rss: 72Mb L: 28/34 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:30.965 [2024-07-15 13:52:08.929549] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:30.965 [2024-07-15 13:52:08.930009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:47080000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.965 [2024-07-15 13:52:08.930038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.965 #35 NEW cov: 12138 ft: 15151 corp: 17/268b lim: 35 exec/s: 35 rss: 72Mb L: 10/34 MS: 1 ShuffleBytes- 00:06:30.965 #36 NEW cov: 12138 ft: 15205 corp: 18/278b lim: 35 exec/s: 36 rss: 73Mb L: 10/34 MS: 1 ChangeASCIIInt- 00:06:31.223 [2024-07-15 13:52:09.050428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00b50008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.050456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 #37 NEW cov: 12138 ft: 15240 corp: 19/288b lim: 35 exec/s: 37 rss: 73Mb L: 10/34 MS: 1 InsertByte- 00:06:31.223 [2024-07-15 13:52:09.110320] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.223 [2024-07-15 13:52:09.110752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:47080000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.110784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 #38 NEW cov: 12138 ft: 15286 corp: 20/299b lim: 35 exec/s: 38 rss: 73Mb L: 11/34 MS: 1 InsertByte- 00:06:31.223 [2024-07-15 13:52:09.180933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00b50008 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.180967] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 #39 NEW cov: 12138 ft: 15303 corp: 21/309b lim: 35 exec/s: 39 rss: 73Mb L: 10/34 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:31.223 [2024-07-15 13:52:09.241562] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.223 [2024-07-15 13:52:09.242018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.242048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.223 [2024-07-15 13:52:09.242129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00cc00cc cdw11:cc000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.242146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.223 [2024-07-15 13:52:09.242232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.242247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.223 [2024-07-15 13:52:09.242338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.223 [2024-07-15 13:52:09.242356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.223 #40 NEW cov: 12138 ft: 15309 corp: 22/337b lim: 35 exec/s: 40 rss: 73Mb L: 28/34 MS: 1 ShuffleBytes- 00:06:31.482 [2024-07-15 13:52:09.301871] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.482 [2024-07-15 13:52:09.302332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.302363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.302455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000040 cdw11:39000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.302472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.302560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00ff cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.302576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.302667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.302686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:31.482 #41 NEW cov: 12138 ft: 15319 corp: 23/365b lim: 35 exec/s: 41 rss: 73Mb L: 28/34 MS: 1 CrossOver- 00:06:31.482 [2024-07-15 13:52:09.372131] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.482 [2024-07-15 13:52:09.372632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.372663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.372751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00cc00cc cdw11:cc000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.372770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.372863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.372879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.372965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:40000108 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.372983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.482 #42 NEW cov: 12138 ft: 15321 corp: 24/399b lim: 35 exec/s: 42 rss: 73Mb L: 34/34 MS: 1 CrossOver- 00:06:31.482 [2024-07-15 13:52:09.422251] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.482 [2024-07-15 13:52:09.422697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:3300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.422726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.422807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.422823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.422911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cccc00cc cdw11:cc00cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.422926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.423022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00cc0000 cdw11:4000cccc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.423041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.482 #43 NEW cov: 12138 ft: 15337 corp: 25/433b lim: 35 exec/s: 43 rss: 73Mb L: 
34/34 MS: 1 ChangeBinInt- 00:06:31.482 [2024-07-15 13:52:09.482291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d80008 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.482328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.482 [2024-07-15 13:52:09.482411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:0000d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.482428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.482 #44 NEW cov: 12138 ft: 15367 corp: 26/447b lim: 35 exec/s: 44 rss: 73Mb L: 14/34 MS: 1 EraseBytes- 00:06:31.482 [2024-07-15 13:52:09.542173] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.482 [2024-07-15 13:52:09.542733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.482 [2024-07-15 13:52:09.542763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.741 #45 NEW cov: 12138 ft: 15393 corp: 27/465b lim: 35 exec/s: 45 rss: 73Mb L: 18/34 MS: 1 InsertRepeatedBytes- 00:06:31.741 [2024-07-15 13:52:09.602385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3d000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.741 [2024-07-15 13:52:09.602414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.741 #46 NEW cov: 12138 ft: 15413 corp: 28/474b lim: 35 exec/s: 46 rss: 73Mb L: 9/34 MS: 1 ChangeByte- 00:06:31.741 [2024-07-15 13:52:09.652584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.741 [2024-07-15 13:52:09.652611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.741 #47 NEW cov: 12138 ft: 15421 corp: 29/483b lim: 35 exec/s: 47 rss: 73Mb L: 9/34 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:31.741 [2024-07-15 13:52:09.712706] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.741 [2024-07-15 13:52:09.713144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.741 [2024-07-15 13:52:09.713173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.741 [2024-07-15 13:52:09.713262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b9000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.741 [2024-07-15 13:52:09.713281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.741 #48 NEW cov: 12138 ft: 15520 corp: 30/497b lim: 35 exec/s: 48 rss: 74Mb L: 14/34 MS: 1 InsertByte- 00:06:31.741 [2024-07-15 13:52:09.773096] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.741 [2024-07-15 13:52:09.773124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.741 #49 NEW cov: 12138 ft: 15530 corp: 31/509b lim: 35 exec/s: 49 rss: 74Mb L: 12/34 MS: 1 CrossOver- 00:06:32.001 [2024-07-15 13:52:09.822890] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.001 [2024-07-15 13:52:09.823413] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.001 [2024-07-15 13:52:09.823877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.001 [2024-07-15 13:52:09.823909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.001 [2024-07-15 13:52:09.823997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:40000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.001 [2024-07-15 13:52:09.824014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.001 [2024-07-15 13:52:09.824102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff003900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.001 [2024-07-15 13:52:09.824120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.001 #50 NEW cov: 12138 ft: 15535 corp: 32/533b lim: 35 exec/s: 25 rss: 74Mb L: 24/34 MS: 1 CrossOver- 00:06:32.001 #50 DONE cov: 12138 ft: 15535 corp: 32/533b lim: 35 exec/s: 25 rss: 74Mb 00:06:32.001 ###### Recommended dictionary. ###### 00:06:32.001 "\001\000\000\000\000\000\0009" # Uses: 2 00:06:32.001 "\001\000\000\000" # Uses: 3 00:06:32.001 "\002\000\000\000\000\000\000\000" # Uses: 0 00:06:32.001 ###### End of recommended dictionary. 
###### 00:06:32.001 Done 50 runs in 2 second(s) 00:06:32.001 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:32.001 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:32.001 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:32.001 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:32.002 13:52:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:32.002 13:52:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:32.002 13:52:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:32.002 13:52:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:32.002 [2024-07-15 13:52:10.033518] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
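The interleaved ../common.sh@72 and ../common.sh@73 lines are the driver loop advancing to the next fuzzer type once the previous run's per-type config and suppression file have been removed. The trace only exposes the loop conditions and the start_llvm_fuzz call, so the surrounding form below is a reconstruction; fuzz_num and timen are names taken from the trace, and the C-style for-loop shape is an assumption:

  # Sketch of the per-type driver loop implied by the ../common.sh@72-73 trace lines.
  for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz "$i" "$timen" 0x1   # fuzzer type, run time, core mask
  done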
00:06:32.002 [2024-07-15 13:52:10.033595] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842823 ] 00:06:32.002 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.569 [2024-07-15 13:52:10.337940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.570 [2024-07-15 13:52:10.429603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.570 [2024-07-15 13:52:10.490056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.570 [2024-07-15 13:52:10.506396] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:32.570 INFO: Running with entropic power schedule (0xFF, 100). 00:06:32.570 INFO: Seed: 999920208 00:06:32.570 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:32.570 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:32.570 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:32.570 INFO: A corpus is not provided, starting from an empty corpus 00:06:32.570 #2 INITED exec/s: 0 rss: 65Mb 00:06:32.570 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:32.570 This may also happen if the target rejected all inputs we tried so far 00:06:33.137 NEW_FUNC[1/684]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:33.137 NEW_FUNC[2/684]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.137 #9 NEW cov: 11799 ft: 11794 corp: 2/14b lim: 20 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:33.137 [2024-07-15 13:52:10.922779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.137 [2024-07-15 13:52:10.922841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.137 NEW_FUNC[1/17]: 0x11db1b0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:06:33.137 NEW_FUNC[2/17]: 0x11dbd30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:06:33.137 #13 NEW cov: 12173 ft: 12921 corp: 3/23b lim: 20 exec/s: 0 rss: 72Mb L: 9/13 MS: 4 ChangeBinInt-ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:33.137 #14 NEW cov: 12179 ft: 13243 corp: 4/36b lim: 20 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeByte- 00:06:33.137 NEW_FUNC[1/2]: 0x1341ad0 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:06:33.137 NEW_FUNC[2/2]: 0x1363700 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3531 00:06:33.137 #15 NEW cov: 12319 ft: 13556 corp: 5/50b lim: 20 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:06:33.137 #16 NEW cov: 12319 ft: 13758 corp: 6/59b lim: 20 exec/s: 0 rss: 72Mb L: 9/14 MS: 1 EraseBytes- 00:06:33.137 #17 NEW cov: 12319 ft: 13838 corp: 7/67b lim: 20 exec/s: 0 rss: 72Mb L: 8/14 MS: 1 EraseBytes- 00:06:33.137 [2024-07-15 13:52:11.183368] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.137 [2024-07-15 13:52:11.183404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.137 #18 NEW cov: 12319 ft: 13887 corp: 8/76b lim: 20 exec/s: 0 rss: 72Mb L: 9/14 MS: 1 ChangeByte- 00:06:33.396 #19 NEW cov: 12319 ft: 13968 corp: 9/85b lim: 20 exec/s: 0 rss: 72Mb L: 9/14 MS: 1 ChangeBit- 00:06:33.396 [2024-07-15 13:52:11.273784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.396 [2024-07-15 13:52:11.273817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.396 #20 NEW cov: 12336 ft: 14189 corp: 10/102b lim: 20 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 CMP- DE: "\377\022\021\244\024zk\324"- 00:06:33.396 #21 NEW cov: 12336 ft: 14274 corp: 11/110b lim: 20 exec/s: 0 rss: 72Mb L: 8/17 MS: 1 PersAutoDict- DE: "\377\022\021\244\024zk\324"- 00:06:33.396 [2024-07-15 13:52:11.363844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.396 [2024-07-15 13:52:11.363872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.396 #22 NEW cov: 12336 ft: 14305 corp: 12/120b lim: 20 exec/s: 0 rss: 73Mb L: 10/17 MS: 1 EraseBytes- 00:06:33.396 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:33.396 #23 NEW cov: 12359 ft: 14349 corp: 13/134b lim: 20 exec/s: 0 rss: 73Mb L: 14/17 MS: 1 ChangeByte- 00:06:33.655 #24 NEW cov: 12359 ft: 14419 corp: 14/143b lim: 20 exec/s: 0 rss: 73Mb L: 9/17 MS: 1 PersAutoDict- DE: "\377\022\021\244\024zk\324"- 00:06:33.655 #25 NEW cov: 12359 ft: 14441 corp: 15/152b lim: 20 exec/s: 0 rss: 73Mb L: 9/17 MS: 1 InsertByte- 00:06:33.655 #26 NEW cov: 12359 ft: 14456 corp: 16/166b lim: 20 exec/s: 26 rss: 73Mb L: 14/17 MS: 1 CopyPart- 00:06:33.655 #27 NEW cov: 12359 ft: 14482 corp: 17/183b lim: 20 exec/s: 27 rss: 73Mb L: 17/17 MS: 1 CrossOver- 00:06:33.655 [2024-07-15 13:52:11.614532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.655 [2024-07-15 13:52:11.614561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.655 #28 NEW cov: 12359 ft: 14546 corp: 18/193b lim: 20 exec/s: 28 rss: 73Mb L: 10/17 MS: 1 CopyPart- 00:06:33.655 #29 NEW cov: 12359 ft: 14618 corp: 19/203b lim: 20 exec/s: 29 rss: 73Mb L: 10/17 MS: 1 ChangeBit- 00:06:33.914 #30 NEW cov: 12359 ft: 14844 corp: 20/210b lim: 20 exec/s: 30 rss: 73Mb L: 7/17 MS: 1 EraseBytes- 00:06:33.914 #31 NEW cov: 12359 ft: 14963 corp: 21/224b lim: 20 exec/s: 31 rss: 73Mb L: 14/17 MS: 1 CopyPart- 00:06:33.914 #32 NEW cov: 12359 ft: 15008 corp: 22/233b lim: 20 exec/s: 32 rss: 73Mb L: 9/17 MS: 1 ChangeByte- 00:06:33.914 #33 NEW cov: 12359 ft: 15039 corp: 23/243b lim: 20 exec/s: 33 rss: 73Mb L: 10/17 MS: 1 InsertByte- 00:06:33.914 #34 NEW cov: 12359 ft: 15080 corp: 24/263b lim: 20 exec/s: 34 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:33.914 #35 NEW cov: 12359 ft: 15108 corp: 25/272b lim: 20 exec/s: 35 rss: 73Mb L: 9/20 MS: 1 ChangeByte- 
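The PersAutoDict mutations above (DE: "\377\022\021\244\024zk\324") mean libFuzzer has latched onto a byte string that keeps producing new coverage in the abort-command fuzzer, while each run still reports "0 files found" and starts from an empty corpus. One way to carry such a finding into the next run is to seed the corpus directory with that byte string before launching; a minimal sketch, assuming the octal escapes exactly as logged and a hypothetical corpus path:

  # Sketch: seed the type-3 corpus with the recurring dictionary bytes.
  # bash printf interprets the \NNN octal escapes exactly as logged above.
  corpus=/path/to/corpus/llvm_nvmf_3          # hypothetical location
  mkdir -p "$corpus"
  printf '\377\022\021\244\024zk\324' > "$corpus/seed-abort-dict"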
00:06:34.174 #36 NEW cov: 12359 ft: 15178 corp: 26/286b lim: 20 exec/s: 36 rss: 73Mb L: 14/20 MS: 1 ChangeBinInt- 00:06:34.174 #37 NEW cov: 12359 ft: 15203 corp: 27/292b lim: 20 exec/s: 37 rss: 73Mb L: 6/20 MS: 1 EraseBytes- 00:06:34.174 #38 NEW cov: 12359 ft: 15206 corp: 28/299b lim: 20 exec/s: 38 rss: 73Mb L: 7/20 MS: 1 EraseBytes- 00:06:34.174 #39 NEW cov: 12359 ft: 15267 corp: 29/305b lim: 20 exec/s: 39 rss: 74Mb L: 6/20 MS: 1 EraseBytes- 00:06:34.174 #40 NEW cov: 12359 ft: 15342 corp: 30/315b lim: 20 exec/s: 40 rss: 74Mb L: 10/20 MS: 1 CopyPart- 00:06:34.434 #41 NEW cov: 12359 ft: 15350 corp: 31/330b lim: 20 exec/s: 41 rss: 74Mb L: 15/20 MS: 1 InsertByte- 00:06:34.434 #42 NEW cov: 12359 ft: 15427 corp: 32/343b lim: 20 exec/s: 42 rss: 74Mb L: 13/20 MS: 1 ChangeBit- 00:06:34.434 #43 NEW cov: 12359 ft: 15431 corp: 33/363b lim: 20 exec/s: 43 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:34.434 #44 NEW cov: 12359 ft: 15438 corp: 34/378b lim: 20 exec/s: 44 rss: 74Mb L: 15/20 MS: 1 PersAutoDict- DE: "\377\022\021\244\024zk\324"- 00:06:34.434 [2024-07-15 13:52:12.396714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.434 [2024-07-15 13:52:12.396747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.434 #45 NEW cov: 12359 ft: 15446 corp: 35/388b lim: 20 exec/s: 45 rss: 74Mb L: 10/20 MS: 1 ChangeByte- 00:06:34.434 #46 NEW cov: 12359 ft: 15506 corp: 36/397b lim: 20 exec/s: 46 rss: 74Mb L: 9/20 MS: 1 ChangeByte- 00:06:34.694 #47 NEW cov: 12359 ft: 15511 corp: 37/410b lim: 20 exec/s: 47 rss: 74Mb L: 13/20 MS: 1 ChangeBit- 00:06:34.694 #48 NEW cov: 12359 ft: 15530 corp: 38/428b lim: 20 exec/s: 48 rss: 74Mb L: 18/20 MS: 1 InsertRepeatedBytes- 00:06:34.694 #49 NEW cov: 12359 ft: 15534 corp: 39/443b lim: 20 exec/s: 24 rss: 74Mb L: 15/20 MS: 1 ChangeByte- 00:06:34.694 #49 DONE cov: 12359 ft: 15534 corp: 39/443b lim: 20 exec/s: 24 rss: 74Mb 00:06:34.694 ###### Recommended dictionary. ###### 00:06:34.694 "\377\022\021\244\024zk\324" # Uses: 3 00:06:34.694 ###### End of recommended dictionary. 
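[annotation] The "Recommended dictionary" block above is libFuzzer reporting which auto-discovered token earned its keep: the 8-byte sequence "\377\022\021\244\024zk\324" was captured by a CMP mutation at #20 and then reused by the three PersAutoDict mutations (#21, #24, #44), hence "# Uses: 3". A minimal sketch of carrying such an entry into a later run via libFuzzer's stock -dict= flag; the file path is hypothetical, the octal escapes from the log are rewritten as the \xNN form the dictionary parser accepts, and whether the SPDK llvm_nvme_fuzz wrapper forwards extra flags through to libFuzzer is an assumption this log does not confirm:

  # Hypothetical: persist the recommended entry in libFuzzer/AFL dictionary syntax
  cat > /tmp/llvm_nvmf_3.dict <<'EOF'
  kw1="\xFF\x12\x11\xA4\x14zk\xD4"
  EOF
  # -dict= is standard libFuzzer; pass-through by the SPDK wrapper is assumed
  ./llvm_nvme_fuzz -dict=/tmp/llvm_nvmf_3.dict ...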
###### 00:06:34.694 Done 49 runs in 2 second(s) 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.694 13:52:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:34.953 [2024-07-15 13:52:12.781437] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
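[annotation] The 4404 listen port in this run (and 4403/4405 in the neighboring ones) is not arbitrary: run.sh@34 derives it from the fuzzer type, and run.sh@38 rewrites the default trsvcid in the JSON config to match. A restatement of just that derivation, using only what the trace above shows (the redirection target is inferred from the nvmf_cfg variable at run.sh@27):

  fuzzer_type=4
  port=44$(printf %02d "$fuzzer_type")   # run.sh@34: -> 4404 (4405 for type 5, etc.)
  # run.sh@38: swap the default trsvcid 4420 for the per-type port
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf \
      > /tmp/fuzz_json_${fuzzer_type}.conf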
00:06:34.953 [2024-07-15 13:52:12.781509] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843198 ] 00:06:34.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.953 [2024-07-15 13:52:12.984496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.212 [2024-07-15 13:52:13.054423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.212 [2024-07-15 13:52:13.113712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.212 [2024-07-15 13:52:13.130023] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:35.212 INFO: Running with entropic power schedule (0xFF, 100). 00:06:35.212 INFO: Seed: 3621911391 00:06:35.212 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:35.212 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:35.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:35.212 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.212 #2 INITED exec/s: 0 rss: 65Mb 00:06:35.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:35.212 This may also happen if the target rejected all inputs we tried so far 00:06:35.212 [2024-07-15 13:52:13.200694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.212 [2024-07-15 13:52:13.200737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.212 [2024-07-15 13:52:13.200855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.212 [2024-07-15 13:52:13.200874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.471 NEW_FUNC[1/696]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:35.471 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:35.471 #5 NEW cov: 11900 ft: 11900 corp: 2/21b lim: 35 exec/s: 0 rss: 72Mb L: 20/20 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:35.730 [2024-07-15 13:52:13.561600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.730 [2024-07-15 13:52:13.561652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.730 [2024-07-15 13:52:13.561752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:09080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.730 [2024-07-15 13:52:13.561774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.730 #11 NEW cov: 12030 ft: 12476 corp: 3/41b lim: 35 
exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:06:35.730 [2024-07-15 13:52:13.631738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.730 [2024-07-15 13:52:13.631770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.730 [2024-07-15 13:52:13.631888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.730 [2024-07-15 13:52:13.631903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.730 #12 NEW cov: 12042 ft: 12661 corp: 4/61b lim: 35 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeByte- 00:06:35.730 [2024-07-15 13:52:13.681779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.730 [2024-07-15 13:52:13.681809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.730 #18 NEW cov: 12127 ft: 13599 corp: 5/74b lim: 35 exec/s: 0 rss: 73Mb L: 13/20 MS: 1 EraseBytes- 00:06:35.730 [2024-07-15 13:52:13.742256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.730 [2024-07-15 13:52:13.742284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.730 #19 NEW cov: 12127 ft: 13798 corp: 6/86b lim: 35 exec/s: 0 rss: 73Mb L: 12/20 MS: 1 EraseBytes- 00:06:35.990 [2024-07-15 13:52:13.802892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080828 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.802921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.990 [2024-07-15 13:52:13.803010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.803027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.990 #20 NEW cov: 12127 ft: 13856 corp: 7/106b lim: 35 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeBit- 00:06:35.990 [2024-07-15 13:52:13.852688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.852715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.990 #21 NEW cov: 12127 ft: 13923 corp: 8/119b lim: 35 exec/s: 0 rss: 73Mb L: 13/20 MS: 1 ShuffleBytes- 00:06:35.990 [2024-07-15 13:52:13.913922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.913949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
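[annotation] The cdw10/cdw11 values in these CREATE IO CQ (opcode 05h) prints are the raw command dwords the fuzzer mutated. Per the NVMe base spec layout for Create I/O Completion Queue, CDW10 packs QID into bits 15:0 and the 0-based QSIZE into bits 31:16, while CDW11 carries PC (bit 0), IEN (bit 1), and IV (bits 31:16). A quick decode of the recurring 0x08080808/0x08080000 pair, written as a shell one-off since the rest of this trace is shell:

  cdw10=0x08080808 cdw11=0x08080000
  printf 'QID=%d QSIZE=%d PC=%d IEN=%d IV=%d\n' \
    $(( cdw10 & 0xffff )) $(( (cdw10 >> 16) & 0xffff )) \
    $(( cdw11 & 1 )) $(( (cdw11 >> 1) & 1 )) $(( (cdw11 >> 16) & 0xffff ))
  # -> QID=2056 QSIZE=2056 PC=0 IEN=0 IV=2056

That every completion comes back INVALID OPCODE (00/01) is expected here: a fabrics controller creates queues through the Connect command rather than the queue-management admin opcodes, so the TCP target rejects each mutated CREATE IO CQ/SQ outright.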
00:06:35.990 [2024-07-15 13:52:13.914039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.914056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.990 [2024-07-15 13:52:13.914154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.914170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.990 #22 NEW cov: 12127 ft: 14173 corp: 9/144b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:06:35.990 [2024-07-15 13:52:13.963313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:13130a13 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:13.963344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.990 #23 NEW cov: 12127 ft: 14233 corp: 10/153b lim: 35 exec/s: 0 rss: 73Mb L: 9/25 MS: 1 InsertRepeatedBytes- 00:06:35.990 [2024-07-15 13:52:14.014129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:14.014156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.990 [2024-07-15 13:52:14.014265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:09080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.990 [2024-07-15 13:52:14.014281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.990 #24 NEW cov: 12127 ft: 14277 corp: 11/173b lim: 35 exec/s: 0 rss: 73Mb L: 20/25 MS: 1 CrossOver- 00:06:36.249 [2024-07-15 13:52:14.063921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08680000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.063950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.250 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:36.250 #25 NEW cov: 12150 ft: 14344 corp: 12/186b lim: 35 exec/s: 0 rss: 73Mb L: 13/25 MS: 1 ChangeByte- 00:06:36.250 [2024-07-15 13:52:14.114171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.114200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.250 #26 NEW cov: 12150 ft: 14402 corp: 13/199b lim: 35 exec/s: 0 rss: 73Mb L: 13/25 MS: 1 ChangeBit- 00:06:36.250 [2024-07-15 13:52:14.174624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.174652] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.250 [2024-07-15 13:52:14.174743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.174760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.250 #27 NEW cov: 12150 ft: 14442 corp: 14/218b lim: 35 exec/s: 27 rss: 73Mb L: 19/25 MS: 1 InsertRepeatedBytes- 00:06:36.250 [2024-07-15 13:52:14.235673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.235700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.250 [2024-07-15 13:52:14.235803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.235820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.250 [2024-07-15 13:52:14.235914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:08000808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.235931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.250 #28 NEW cov: 12150 ft: 14490 corp: 15/243b lim: 35 exec/s: 28 rss: 73Mb L: 25/25 MS: 1 ChangeByte- 00:06:36.250 [2024-07-15 13:52:14.295448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.295476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.250 [2024-07-15 13:52:14.295578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.250 [2024-07-15 13:52:14.295594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.509 #29 NEW cov: 12150 ft: 14534 corp: 16/262b lim: 35 exec/s: 29 rss: 73Mb L: 19/25 MS: 1 ChangeBit- 00:06:36.509 [2024-07-15 13:52:14.355473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.509 [2024-07-15 13:52:14.355500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.509 #30 NEW cov: 12150 ft: 14560 corp: 17/275b lim: 35 exec/s: 30 rss: 73Mb L: 13/25 MS: 1 ChangeByte- 00:06:36.509 [2024-07-15 13:52:14.416046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:13130a13 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.509 [2024-07-15 13:52:14.416074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.509 #31 NEW cov: 
12150 ft: 14585 corp: 18/284b lim: 35 exec/s: 31 rss: 73Mb L: 9/25 MS: 1 CopyPart- 00:06:36.509 [2024-07-15 13:52:14.476895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00bc0800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.509 [2024-07-15 13:52:14.476922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.509 [2024-07-15 13:52:14.477029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.509 [2024-07-15 13:52:14.477045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.509 #32 NEW cov: 12150 ft: 14605 corp: 19/303b lim: 35 exec/s: 32 rss: 74Mb L: 19/25 MS: 1 ChangeByte- 00:06:36.509 [2024-07-15 13:52:14.536950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.509 [2024-07-15 13:52:14.536979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.509 #33 NEW cov: 12150 ft: 14613 corp: 20/316b lim: 35 exec/s: 33 rss: 74Mb L: 13/25 MS: 1 ShuffleBytes- 00:06:36.769 [2024-07-15 13:52:14.587100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00bc0800 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.769 [2024-07-15 13:52:14.587130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.769 #34 NEW cov: 12150 ft: 14627 corp: 21/326b lim: 35 exec/s: 34 rss: 74Mb L: 10/25 MS: 1 EraseBytes- 00:06:36.769 [2024-07-15 13:52:14.657442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.769 [2024-07-15 13:52:14.657488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.769 #35 NEW cov: 12150 ft: 14643 corp: 22/338b lim: 35 exec/s: 35 rss: 74Mb L: 12/25 MS: 1 ChangeByte- 00:06:36.769 [2024-07-15 13:52:14.727972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00bc0800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.769 [2024-07-15 13:52:14.728001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.769 [2024-07-15 13:52:14.728100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.769 [2024-07-15 13:52:14.728116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.769 #36 NEW cov: 12150 ft: 14645 corp: 23/355b lim: 35 exec/s: 36 rss: 74Mb L: 17/25 MS: 1 EraseBytes- 00:06:36.769 [2024-07-15 13:52:14.778094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08280808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.769 [2024-07-15 13:52:14.778122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.769 [2024-07-15 13:52:14.778229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:08080808 cdw11:28080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.769 [2024-07-15 13:52:14.778246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.769 #37 NEW cov: 12150 ft: 14653 corp: 24/374b lim: 35 exec/s: 37 rss: 74Mb L: 19/25 MS: 1 CopyPart- 00:06:37.028 [2024-07-15 13:52:14.848353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:f8f70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.028 [2024-07-15 13:52:14.848382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.028 #38 NEW cov: 12150 ft: 14655 corp: 25/387b lim: 35 exec/s: 38 rss: 74Mb L: 13/25 MS: 1 ChangeBinInt- 00:06:37.028 [2024-07-15 13:52:14.898458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.028 [2024-07-15 13:52:14.898488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.028 #39 NEW cov: 12150 ft: 14683 corp: 26/400b lim: 35 exec/s: 39 rss: 74Mb L: 13/25 MS: 1 ChangeBit- 00:06:37.028 [2024-07-15 13:52:14.968715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08ff0808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.028 [2024-07-15 13:52:14.968742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.028 #40 NEW cov: 12150 ft: 14698 corp: 27/412b lim: 35 exec/s: 40 rss: 74Mb L: 12/25 MS: 1 ShuffleBytes- 00:06:37.028 [2024-07-15 13:52:15.028875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:08080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.028 [2024-07-15 13:52:15.028903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.029 #41 NEW cov: 12150 ft: 14718 corp: 28/424b lim: 35 exec/s: 41 rss: 74Mb L: 12/25 MS: 1 ChangeBit- 00:06:37.029 [2024-07-15 13:52:15.079115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0800 cdw11:ff000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.029 [2024-07-15 13:52:15.079142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.288 #42 NEW cov: 12150 ft: 14798 corp: 29/437b lim: 35 exec/s: 42 rss: 74Mb L: 13/25 MS: 1 InsertRepeatedBytes- 00:06:37.288 [2024-07-15 13:52:15.139835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08080808 cdw11:f8f70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.288 [2024-07-15 13:52:15.139862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.288 [2024-07-15 13:52:15.139955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0808fd08 cdw11:087e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.288 [2024-07-15 
13:52:15.139975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.288 #43 NEW cov: 12150 ft: 14801 corp: 30/451b lim: 35 exec/s: 21 rss: 74Mb L: 14/25 MS: 1 InsertByte- 00:06:37.288 #43 DONE cov: 12150 ft: 14801 corp: 30/451b lim: 35 exec/s: 21 rss: 74Mb 00:06:37.288 Done 43 runs in 2 second(s) 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.288 13:52:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:37.288 [2024-07-15 13:52:15.344995] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
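[annotation] Before each run, run.sh@41-42 seed a LeakSanitizer suppression file and run.sh@32 points LSAN_OPTIONS at it, presumably so known allocation sites in the target's teardown paths don't fail these short fuzz runs. The standalone equivalent, matching the trace above (the echo redirection targets aren't shown in the trace, but the suppress_file variable at run.sh@28 makes the destination clear):

  cat > /var/tmp/suppress_nvmf_fuzz <<'EOF'
  leak:spdk_nvmf_qpair_disconnect
  leak:nvmf_ctrlr_create
  EOF
  export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0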
00:06:37.288 [2024-07-15 13:52:15.345069] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843573 ] 00:06:37.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.548 [2024-07-15 13:52:15.557370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.808 [2024-07-15 13:52:15.627303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.808 [2024-07-15 13:52:15.686662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.808 [2024-07-15 13:52:15.702942] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:37.808 INFO: Running with entropic power schedule (0xFF, 100). 00:06:37.808 INFO: Seed: 1899929670 00:06:37.808 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:37.808 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:37.808 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:37.808 INFO: A corpus is not provided, starting from an empty corpus 00:06:37.808 #2 INITED exec/s: 0 rss: 65Mb 00:06:37.808 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:37.808 This may also happen if the target rejected all inputs we tried so far 00:06:37.808 [2024-07-15 13:52:15.773778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.808 [2024-07-15 13:52:15.773818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.808 [2024-07-15 13:52:15.773924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.808 [2024-07-15 13:52:15.773942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.067 NEW_FUNC[1/696]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:38.067 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:38.067 #8 NEW cov: 11916 ft: 11913 corp: 2/22b lim: 45 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:06:38.067 [2024-07-15 13:52:16.134378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a0a cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.067 [2024-07-15 13:52:16.134424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.067 [2024-07-15 13:52:16.134519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.067 [2024-07-15 13:52:16.134540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.327 #9 NEW cov: 12047 ft: 12539 corp: 3/44b lim: 45 exec/s: 0 rss: 72Mb L: 
22/22 MS: 1 CrossOver- 00:06:38.327 [2024-07-15 13:52:16.184540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.184569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.327 [2024-07-15 13:52:16.184663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.184679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.327 #10 NEW cov: 12053 ft: 12849 corp: 4/67b lim: 45 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 CMP- DE: "\035\000"- 00:06:38.327 [2024-07-15 13:52:16.255132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a0a cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.255162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.327 [2024-07-15 13:52:16.255248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.255265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.327 [2024-07-15 13:52:16.255343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.255357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.327 #11 NEW cov: 12138 ft: 13411 corp: 5/96b lim: 45 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:38.327 [2024-07-15 13:52:16.315416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.315444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.327 [2024-07-15 13:52:16.315532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.315547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.327 [2024-07-15 13:52:16.315630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.315645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.327 #12 NEW cov: 12138 ft: 13507 corp: 6/127b lim: 45 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 CopyPart- 00:06:38.327 [2024-07-15 13:52:16.375261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 
[2024-07-15 13:52:16.375287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.327 [2024-07-15 13:52:16.375372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.327 [2024-07-15 13:52:16.375388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.327 #13 NEW cov: 12138 ft: 13585 corp: 7/150b lim: 45 exec/s: 0 rss: 72Mb L: 23/31 MS: 1 ChangeBit- 00:06:38.586 [2024-07-15 13:52:16.425428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.425454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.586 [2024-07-15 13:52:16.425538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76ba0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.425554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.586 #14 NEW cov: 12138 ft: 13630 corp: 8/172b lim: 45 exec/s: 0 rss: 73Mb L: 22/31 MS: 1 InsertByte- 00:06:38.586 [2024-07-15 13:52:16.475602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a761d00 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.475626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.586 [2024-07-15 13:52:16.475714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.475732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.586 #15 NEW cov: 12138 ft: 13676 corp: 9/195b lim: 45 exec/s: 0 rss: 73Mb L: 23/31 MS: 1 PersAutoDict- DE: "\035\000"- 00:06:38.586 [2024-07-15 13:52:16.525911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.525937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.586 [2024-07-15 13:52:16.526027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.526042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.586 #16 NEW cov: 12138 ft: 13707 corp: 10/219b lim: 45 exec/s: 0 rss: 73Mb L: 24/31 MS: 1 InsertByte- 00:06:38.586 [2024-07-15 13:52:16.586461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.586487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:38.586 [2024-07-15 13:52:16.586583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:2e767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.586599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.586 [2024-07-15 13:52:16.586690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.586705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.586 #17 NEW cov: 12138 ft: 13788 corp: 11/250b lim: 45 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 ChangeByte- 00:06:38.586 [2024-07-15 13:52:16.646302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.646328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.586 [2024-07-15 13:52:16.646412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.586 [2024-07-15 13:52:16.646427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.846 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:38.846 #18 NEW cov: 12161 ft: 13818 corp: 12/273b lim: 45 exec/s: 0 rss: 73Mb L: 23/31 MS: 1 CrossOver- 00:06:38.846 [2024-07-15 13:52:16.696558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.846 [2024-07-15 13:52:16.696583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.846 [2024-07-15 13:52:16.696665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.846 [2024-07-15 13:52:16.696682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.846 #19 NEW cov: 12161 ft: 13828 corp: 13/294b lim: 45 exec/s: 0 rss: 73Mb L: 21/31 MS: 1 PersAutoDict- DE: "\035\000"- 00:06:38.846 [2024-07-15 13:52:16.746260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a0a cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.846 [2024-07-15 13:52:16.746286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.846 #20 NEW cov: 12161 ft: 14567 corp: 14/311b lim: 45 exec/s: 20 rss: 73Mb L: 17/31 MS: 1 EraseBytes- 00:06:38.846 [2024-07-15 13:52:16.797234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.846 [2024-07-15 13:52:16.797260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:38.846 [2024-07-15 13:52:16.797363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76a00005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.846 [2024-07-15 13:52:16.797380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.847 [2024-07-15 13:52:16.797468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0a00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.847 [2024-07-15 13:52:16.797491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.847 #21 NEW cov: 12161 ft: 14588 corp: 15/344b lim: 45 exec/s: 21 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:38.847 [2024-07-15 13:52:16.847401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.847 [2024-07-15 13:52:16.847429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.847 [2024-07-15 13:52:16.847529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76a00005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.847 [2024-07-15 13:52:16.847547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.847 [2024-07-15 13:52:16.847639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0a00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.847 [2024-07-15 13:52:16.847656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.847 #22 NEW cov: 12161 ft: 14602 corp: 16/377b lim: 45 exec/s: 22 rss: 73Mb L: 33/33 MS: 1 ShuffleBytes- 00:06:38.847 [2024-07-15 13:52:16.917390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a761d00 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.847 [2024-07-15 13:52:16.917417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.847 [2024-07-15 13:52:16.917509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76720003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.847 [2024-07-15 13:52:16.917525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.106 #23 NEW cov: 12161 ft: 14678 corp: 17/400b lim: 45 exec/s: 23 rss: 73Mb L: 23/33 MS: 1 ChangeBit- 00:06:39.106 [2024-07-15 13:52:16.977516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760af6 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:16.977541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.106 [2024-07-15 13:52:16.977627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 
13:52:16.977642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.106 #24 NEW cov: 12161 ft: 14688 corp: 18/421b lim: 45 exec/s: 24 rss: 73Mb L: 21/33 MS: 1 ChangeBit- 00:06:39.106 [2024-07-15 13:52:17.028181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:17.028205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.106 [2024-07-15 13:52:17.028293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76a00005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:17.028309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.106 [2024-07-15 13:52:17.028389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:17.028405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.106 #25 NEW cov: 12161 ft: 14704 corp: 19/453b lim: 45 exec/s: 25 rss: 73Mb L: 32/33 MS: 1 EraseBytes- 00:06:39.106 [2024-07-15 13:52:17.088013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:15760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:17.088038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.106 [2024-07-15 13:52:17.088115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:17.088131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.106 #26 NEW cov: 12161 ft: 14708 corp: 20/474b lim: 45 exec/s: 26 rss: 73Mb L: 21/33 MS: 1 ChangeBinInt- 00:06:39.106 [2024-07-15 13:52:17.137660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a0a cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.106 [2024-07-15 13:52:17.137686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.106 #27 NEW cov: 12161 ft: 14724 corp: 21/487b lim: 45 exec/s: 27 rss: 73Mb L: 13/33 MS: 1 EraseBytes- 00:06:39.366 [2024-07-15 13:52:17.198572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:cf000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.198599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.366 [2024-07-15 13:52:17.198686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.198702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:39.366 [2024-07-15 13:52:17.198792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.198807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.366 #28 NEW cov: 12161 ft: 14741 corp: 22/518b lim: 45 exec/s: 28 rss: 73Mb L: 31/33 MS: 1 ChangeByte- 00:06:39.366 [2024-07-15 13:52:17.248483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:001d0000 cdw11:000a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.248510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.366 [2024-07-15 13:52:17.248609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.248625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.366 #29 NEW cov: 12161 ft: 14768 corp: 23/544b lim: 45 exec/s: 29 rss: 73Mb L: 26/33 MS: 1 InsertRepeatedBytes- 00:06:39.366 [2024-07-15 13:52:17.298669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.298696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.366 [2024-07-15 13:52:17.298782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:768a7676 cdw11:89890004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.298797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.366 #30 NEW cov: 12161 ft: 14780 corp: 24/565b lim: 45 exec/s: 30 rss: 73Mb L: 21/33 MS: 1 ChangeBinInt- 00:06:39.366 [2024-07-15 13:52:17.358834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.358863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.366 [2024-07-15 13:52:17.358955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:768a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.358971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.366 #31 NEW cov: 12161 ft: 14820 corp: 25/588b lim: 45 exec/s: 31 rss: 73Mb L: 23/33 MS: 1 PersAutoDict- DE: "\035\000"- 00:06:39.366 [2024-07-15 13:52:17.419120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.419146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.366 [2024-07-15 13:52:17.419242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ 
(01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.366 [2024-07-15 13:52:17.419260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.625 #37 NEW cov: 12161 ft: 14828 corp: 26/611b lim: 45 exec/s: 37 rss: 73Mb L: 23/33 MS: 1 ShuffleBytes- 00:06:39.625 [2024-07-15 13:52:17.469996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.470024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.470113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.470130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.470215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.470237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.470321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.470337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.625 #38 NEW cov: 12161 ft: 15178 corp: 27/650b lim: 45 exec/s: 38 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:06:39.625 [2024-07-15 13:52:17.519933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.519961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.520047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76a00005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.520063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.520147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.520163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.625 #39 NEW cov: 12161 ft: 15212 corp: 28/682b lim: 45 exec/s: 39 rss: 73Mb L: 32/39 MS: 1 CrossOver- 00:06:39.625 [2024-07-15 13:52:17.589748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:1d000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.589776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:39.625 [2024-07-15 13:52:17.589866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:760a7676 cdw11:0a760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.589882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.625 #40 NEW cov: 12161 ft: 15225 corp: 29/706b lim: 45 exec/s: 40 rss: 73Mb L: 24/39 MS: 1 CrossOver- 00:06:39.625 [2024-07-15 13:52:17.640327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:73010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.640354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.640456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1d007676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.640473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.625 [2024-07-15 13:52:17.640567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:89898a89 cdw11:89890007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.625 [2024-07-15 13:52:17.640583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.625 #41 NEW cov: 12161 ft: 15283 corp: 30/733b lim: 45 exec/s: 41 rss: 74Mb L: 27/39 MS: 1 CMP- DE: "s\001\000\000"- 00:06:39.885 [2024-07-15 13:52:17.710280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76760a76 cdw11:76760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.885 [2024-07-15 13:52:17.710308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.885 [2024-07-15 13:52:17.710400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.885 [2024-07-15 13:52:17.710415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.885 #42 NEW cov: 12161 ft: 15320 corp: 31/754b lim: 45 exec/s: 42 rss: 74Mb L: 21/39 MS: 1 ChangeBinInt- 00:06:39.885 [2024-07-15 13:52:17.760089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a761d00 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.885 [2024-07-15 13:52:17.760116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.885 #43 NEW cov: 12161 ft: 15341 corp: 32/770b lim: 45 exec/s: 21 rss: 74Mb L: 16/39 MS: 1 EraseBytes- 00:06:39.885 #43 DONE cov: 12161 ft: 15341 corp: 32/770b lim: 45 exec/s: 21 rss: 74Mb 00:06:39.885 ###### Recommended dictionary. ###### 00:06:39.885 "\035\000" # Uses: 3 00:06:39.885 "s\001\000\000" # Uses: 0 00:06:39.885 ###### End of recommended dictionary. 
###### 00:06:39.885 Done 43 runs in 2 second(s) 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.885 13:52:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:40.145 [2024-07-15 13:52:17.962744] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
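[Annotation on the harness launched above: llvm_nvme_fuzz is an LLVM libFuzzer target, which is why the INFO:/WARNING:/"#N NEW cov:" lines surrounding it follow libFuzzer's standard status format. As a minimal sketch, every such target exports the entry point below. This is only the generic libFuzzer shape, not SPDK's actual TestOneInput (the NEW_FUNC lines in this log point at the real one in llvm_nvme_fuzz.c:780), and the decode step here is a hypothetical placeholder.

#include <stddef.h>
#include <stdint.h>

/*
 * Generic libFuzzer entry point: the engine calls this once per mutated
 * input and records new coverage after each call, which is what the
 * "#N NEW cov: ... ft: ... corp: ..." lines in this log report.
 */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	if (size == 0) {
		/* Rejecting every input would produce the "no interesting
		 * inputs were found so far" warning seen in this log. */
		return 0;
	}

	/* Hypothetical placeholder: a real harness decodes `data` into an
	 * NVMe admin command and submits it to the target under test. */
	return 0;
}

End of annotation.]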
00:06:40.145 [2024-07-15 13:52:17.962828] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843941 ] 00:06:40.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.145 [2024-07-15 13:52:18.172558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.404 [2024-07-15 13:52:18.242752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.404 [2024-07-15 13:52:18.302262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.404 [2024-07-15 13:52:18.318561] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:40.404 INFO: Running with entropic power schedule (0xFF, 100). 00:06:40.404 INFO: Seed: 222963497 00:06:40.404 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:40.404 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:40.404 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:40.404 INFO: A corpus is not provided, starting from an empty corpus 00:06:40.404 #2 INITED exec/s: 0 rss: 65Mb 00:06:40.404 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:40.404 This may also happen if the target rejected all inputs we tried so far 00:06:40.404 [2024-07-15 13:52:18.383856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:40.404 [2024-07-15 13:52:18.383885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.663 NEW_FUNC[1/694]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:40.663 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.663 #6 NEW cov: 11834 ft: 11831 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 4 CopyPart-ChangeByte-ChangeByte-CrossOver- 00:06:40.663 [2024-07-15 13:52:18.724867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:40.663 [2024-07-15 13:52:18.724927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.922 #7 NEW cov: 11964 ft: 12454 corp: 3/6b lim: 10 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:06:40.922 [2024-07-15 13:52:18.785002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:40.922 [2024-07-15 13:52:18.785031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.922 [2024-07-15 13:52:18.785085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.922 [2024-07-15 13:52:18.785099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.922 [2024-07-15 13:52:18.785152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.922 [2024-07-15 13:52:18.785166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.922 #8 NEW cov: 11970 ft: 13047 corp: 4/12b lim: 10 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:40.922 [2024-07-15 13:52:18.824878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:40.922 [2024-07-15 13:52:18.824904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.922 #9 NEW cov: 12055 ft: 13384 corp: 5/15b lim: 10 exec/s: 0 rss: 71Mb L: 3/6 MS: 1 CrossOver- 00:06:40.922 [2024-07-15 13:52:18.864995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000002c cdw11:00000000 00:06:40.922 [2024-07-15 13:52:18.865021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.922 #12 NEW cov: 12055 ft: 13465 corp: 6/17b lim: 10 exec/s: 0 rss: 71Mb L: 2/6 MS: 3 EraseBytes-CopyPart-CrossOver- 00:06:40.922 [2024-07-15 13:52:18.905344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:40.922 [2024-07-15 13:52:18.905369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.922 [2024-07-15 13:52:18.905423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:40.923 [2024-07-15 13:52:18.905437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.923 [2024-07-15 13:52:18.905489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 00:06:40.923 [2024-07-15 13:52:18.905503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.923 #13 NEW cov: 12055 ft: 13518 corp: 7/24b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 CMP- DE: "\001\000\000\010"- 00:06:40.923 [2024-07-15 13:52:18.955260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:40.923 [2024-07-15 13:52:18.955286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.923 #14 NEW cov: 12055 ft: 13694 corp: 8/26b lim: 10 exec/s: 0 rss: 71Mb L: 2/7 MS: 1 ShuffleBytes- 00:06:41.182 [2024-07-15 13:52:18.995610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:41.182 [2024-07-15 13:52:18.995637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.182 [2024-07-15 13:52:18.995696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002c01 cdw11:00000000 00:06:41.182 [2024-07-15 13:52:18.995709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.182 [2024-07-15 13:52:18.995766] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.182 [2024-07-15 13:52:18.995780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.182 #15 NEW cov: 12055 ft: 13772 corp: 9/33b lim: 10 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 PersAutoDict- DE: "\001\000\000\010"- 00:06:41.182 [2024-07-15 13:52:19.045856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.182 [2024-07-15 13:52:19.045882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.182 [2024-07-15 13:52:19.045951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.182 [2024-07-15 13:52:19.045966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.182 [2024-07-15 13:52:19.046020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.182 [2024-07-15 13:52:19.046033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.182 [2024-07-15 13:52:19.046087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:41.182 [2024-07-15 13:52:19.046100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.182 #16 NEW cov: 12055 ft: 14056 corp: 10/41b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:41.182 [2024-07-15 13:52:19.085666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:41.182 [2024-07-15 13:52:19.085691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.182 [2024-07-15 13:52:19.085761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000082c cdw11:00000000 00:06:41.182 [2024-07-15 13:52:19.085775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.183 #17 NEW cov: 12055 ft: 14259 corp: 11/45b lim: 10 exec/s: 0 rss: 72Mb L: 4/8 MS: 1 EraseBytes- 00:06:41.183 [2024-07-15 13:52:19.136193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.136226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.136297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ba09 cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.136311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.136366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ba cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.136379] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.136433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.136447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.136500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.136513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.183 #18 NEW cov: 12055 ft: 14346 corp: 12/55b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\011\000"- 00:06:41.183 [2024-07-15 13:52:19.186105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.186131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.186183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.186197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.186251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002c cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.186265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.183 #19 NEW cov: 12055 ft: 14423 corp: 13/61b lim: 10 exec/s: 0 rss: 72Mb L: 6/10 MS: 1 ChangeByte- 00:06:41.183 [2024-07-15 13:52:19.236088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7d cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.236112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.183 [2024-07-15 13:52:19.236166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007d7d cdw11:00000000 00:06:41.183 [2024-07-15 13:52:19.236180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.442 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:41.442 #20 NEW cov: 12078 ft: 14461 corp: 14/66b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:06:41.442 [2024-07-15 13:52:19.276089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a11 cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.276115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.442 #21 NEW cov: 12078 ft: 14483 corp: 15/68b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 InsertByte- 00:06:41.442 [2024-07-15 13:52:19.316225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7e cdw11:00000000 00:06:41.442 [2024-07-15 
13:52:19.316252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.442 #22 NEW cov: 12078 ft: 14584 corp: 16/71b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 ChangeByte- 00:06:41.442 [2024-07-15 13:52:19.356443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000400 cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.356468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.442 [2024-07-15 13:52:19.356522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.356536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.442 #23 NEW cov: 12078 ft: 14603 corp: 17/75b lim: 10 exec/s: 23 rss: 72Mb L: 4/10 MS: 1 ChangeBinInt- 00:06:41.442 [2024-07-15 13:52:19.406856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002cba cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.406881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.442 [2024-07-15 13:52:19.406953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.406967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.442 [2024-07-15 13:52:19.407023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.407036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.442 [2024-07-15 13:52:19.407091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ba0a cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.407104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.442 #24 NEW cov: 12078 ft: 14622 corp: 18/84b lim: 10 exec/s: 24 rss: 72Mb L: 9/10 MS: 1 CopyPart- 00:06:41.442 [2024-07-15 13:52:19.446695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7d cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.446719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.442 [2024-07-15 13:52:19.446774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007d7e cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.446788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.442 #25 NEW cov: 12078 ft: 14641 corp: 19/89b lim: 10 exec/s: 25 rss: 72Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:41.442 [2024-07-15 13:52:19.496724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.442 [2024-07-15 13:52:19.496748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:41.701 #26 NEW cov: 12078 ft: 14648 corp: 20/92b lim: 10 exec/s: 26 rss: 72Mb L: 3/10 MS: 1 ChangeBinInt- 00:06:41.701 [2024-07-15 13:52:19.537037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2c cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.537062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.537129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d101 cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.537144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.537197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.537211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.701 #27 NEW cov: 12078 ft: 14688 corp: 21/99b lim: 10 exec/s: 27 rss: 72Mb L: 7/10 MS: 1 ChangeBinInt- 00:06:41.701 [2024-07-15 13:52:19.586936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.586961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.701 #28 NEW cov: 12078 ft: 14695 corp: 22/101b lim: 10 exec/s: 28 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:06:41.701 [2024-07-15 13:52:19.627300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.627324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.627378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.627392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.627447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.627460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.701 #29 NEW cov: 12078 ft: 14723 corp: 23/108b lim: 10 exec/s: 29 rss: 72Mb L: 7/10 MS: 1 ChangeBit- 00:06:41.701 [2024-07-15 13:52:19.667197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000d4 cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.667227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.701 #30 NEW cov: 12078 ft: 14790 corp: 24/110b lim: 10 exec/s: 30 rss: 72Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:41.701 [2024-07-15 13:52:19.717352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000022c cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.717395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:41.701 #31 NEW cov: 12078 ft: 14820 corp: 25/112b lim: 10 exec/s: 31 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:06:41.701 [2024-07-15 13:52:19.757762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002cba cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.757787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.757841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.757855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.757909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000baba cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.757922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.701 [2024-07-15 13:52:19.757976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fa0a cdw11:00000000 00:06:41.701 [2024-07-15 13:52:19.757989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.961 #32 NEW cov: 12078 ft: 14857 corp: 26/121b lim: 10 exec/s: 32 rss: 72Mb L: 9/10 MS: 1 ChangeBit- 00:06:41.961 [2024-07-15 13:52:19.807529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.807553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.961 #33 NEW cov: 12078 ft: 14884 corp: 27/124b lim: 10 exec/s: 33 rss: 72Mb L: 3/10 MS: 1 ChangeBit- 00:06:41.961 [2024-07-15 13:52:19.847904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.847929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.847999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000001a3 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.848013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.848067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.848081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.961 #34 NEW cov: 12078 ft: 14890 corp: 28/131b lim: 10 exec/s: 34 rss: 73Mb L: 7/10 MS: 1 ChangeByte- 00:06:41.961 [2024-07-15 13:52:19.898163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.898187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.898247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000006c cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.898262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.898316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006c6c cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.898329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.898380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.898393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.961 #35 NEW cov: 12078 ft: 14907 corp: 29/140b lim: 10 exec/s: 35 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:41.961 [2024-07-15 13:52:19.938251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.938276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.938328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.938341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.938394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a7d cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.938407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:19.938459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007d7e cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.938472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.961 #36 NEW cov: 12078 ft: 14911 corp: 30/149b lim: 10 exec/s: 36 rss: 73Mb L: 9/10 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:41.961 [2024-07-15 13:52:19.988095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000abc cdw11:00000000 00:06:41.961 [2024-07-15 13:52:19.988119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.961 #37 NEW cov: 12078 ft: 14927 corp: 31/151b lim: 10 exec/s: 37 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:06:41.961 [2024-07-15 13:52:20.028615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:06:41.961 [2024-07-15 13:52:20.028641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:20.028697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002cd1 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:20.028711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:20.028764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:20.028779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.961 [2024-07-15 13:52:20.028831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:06:41.961 [2024-07-15 13:52:20.028845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.218 #38 NEW cov: 12078 ft: 14954 corp: 32/159b lim: 10 exec/s: 38 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:06:42.218 [2024-07-15 13:52:20.088384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:42.218 [2024-07-15 13:52:20.088414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.218 #39 NEW cov: 12078 ft: 14966 corp: 33/161b lim: 10 exec/s: 39 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:06:42.218 [2024-07-15 13:52:20.128645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000090a cdw11:00000000 00:06:42.218 [2024-07-15 13:52:20.128672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.218 [2024-07-15 13:52:20.128725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.218 [2024-07-15 13:52:20.128739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.219 [2024-07-15 13:52:20.128790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.128804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.219 #40 NEW cov: 12078 ft: 14981 corp: 34/168b lim: 10 exec/s: 40 rss: 73Mb L: 7/10 MS: 1 CrossOver- 00:06:42.219 [2024-07-15 13:52:20.168898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.168923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.219 [2024-07-15 13:52:20.168978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001211 cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.168991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.219 [2024-07-15 13:52:20.169045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a3d8 cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.169058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.219 [2024-07-15 13:52:20.169112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e775 cdw11:00000000 00:06:42.219 
[2024-07-15 13:52:20.169125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.219 #41 NEW cov: 12078 ft: 15008 corp: 35/177b lim: 10 exec/s: 41 rss: 74Mb L: 9/10 MS: 1 CMP- DE: "\377\022\021\243\330\347u\350"- 00:06:42.219 [2024-07-15 13:52:20.218792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.218818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.219 [2024-07-15 13:52:20.218875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.218890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.219 #42 NEW cov: 12078 ft: 15020 corp: 36/181b lim: 10 exec/s: 42 rss: 74Mb L: 4/10 MS: 1 CopyPart- 00:06:42.219 [2024-07-15 13:52:20.268762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f2d3 cdw11:00000000 00:06:42.219 [2024-07-15 13:52:20.268788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.219 #43 NEW cov: 12078 ft: 15033 corp: 37/183b lim: 10 exec/s: 43 rss: 74Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:42.477 [2024-07-15 13:52:20.308899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7e cdw11:00000000 00:06:42.477 [2024-07-15 13:52:20.308932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.477 #44 NEW cov: 12078 ft: 15083 corp: 38/185b lim: 10 exec/s: 44 rss: 74Mb L: 2/10 MS: 1 EraseBytes- 00:06:42.477 [2024-07-15 13:52:20.359423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.477 [2024-07-15 13:52:20.359450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.477 [2024-07-15 13:52:20.359519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007d7d cdw11:00000000 00:06:42.477 [2024-07-15 13:52:20.359533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.477 [2024-07-15 13:52:20.359588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007d7d cdw11:00000000 00:06:42.477 [2024-07-15 13:52:20.359602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.477 [2024-07-15 13:52:20.359659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007d7d cdw11:00000000 00:06:42.477 [2024-07-15 13:52:20.359672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.477 #45 NEW cov: 12078 ft: 15092 corp: 39/193b lim: 10 exec/s: 22 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:06:42.477 #45 DONE cov: 12078 ft: 15092 corp: 39/193b lim: 10 exec/s: 22 rss: 74Mb 00:06:42.477 ###### Recommended dictionary. 
###### 00:06:42.477 "\001\000\000\010" # Uses: 1 00:06:42.477 "\011\000" # Uses: 0 00:06:42.477 "\000\000\000\000" # Uses: 0 00:06:42.477 "\377\022\021\243\330\347u\350" # Uses: 0 00:06:42.477 ###### End of recommended dictionary. ###### 00:06:42.477 Done 45 runs in 2 second(s) 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.477 13:52:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:42.735 [2024-07-15 13:52:20.549695] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
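[Annotation on the command prints in these runs: each nvme_admin_qpair_print_command line decodes the fuzzed 64-byte NVMe submission queue entry, e.g. "DELETE IO SQ (00)" is admin opcode 0x00, "cid:" is the command identifier echoed back in the matching completion print, and "cdw10:"/"cdw11:" are the command-specific double words the fuzzer mutates. As a rough reading aid, a simplified field layout is sketched below; field names follow the NVMe spec, and this illustrative struct is not SPDK's actual struct spdk_nvme_cmd, which packs several of these fields as bitfields.

#include <stdint.h>

/*
 * Simplified view of a 64-byte NVMe submission queue entry, matching the
 * fields printed in this log: opcode "(00)/(01)/(04)", "cid:", "nsid:",
 * "cdw10:", "cdw11:".
 */
struct nvme_sqe_view {
	uint8_t  opc;       /* 0x00 DELETE IO SQ, 0x01 CREATE IO SQ, 0x04 DELETE IO CQ */
	uint8_t  flags;     /* fused-operation and PRP/SGL selector bits */
	uint16_t cid;       /* command identifier ("cid:4", "cid:5", ...) */
	uint32_t nsid;      /* namespace identifier ("nsid:0") */
	uint32_t rsvd[2];
	uint64_t mptr;      /* metadata pointer */
	uint8_t  dptr[16];  /* data pointer: PRP entries or an SGL descriptor
	                     * ("SGL DATA BLOCK OFFSET 0x0 len:0x1000" above) */
	uint32_t cdw10;     /* command-specific; the main fuzzed value here */
	uint32_t cdw11;     /* command-specific */
	uint32_t cdw12;
	uint32_t cdw13;
	uint32_t cdw14;
	uint32_t cdw15;
};

End of annotation.]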
00:06:42.735 [2024-07-15 13:52:20.549781] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844327 ] 00:06:42.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.735 [2024-07-15 13:52:20.764502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.994 [2024-07-15 13:52:20.834699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.994 [2024-07-15 13:52:20.894328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.994 [2024-07-15 13:52:20.910590] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:42.994 INFO: Running with entropic power schedule (0xFF, 100). 00:06:42.994 INFO: Seed: 2812986635 00:06:42.994 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:42.994 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:42.994 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:42.994 INFO: A corpus is not provided, starting from an empty corpus 00:06:42.994 #2 INITED exec/s: 0 rss: 65Mb 00:06:42.994 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:42.994 This may also happen if the target rejected all inputs we tried so far 00:06:42.994 [2024-07-15 13:52:20.969940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.994 [2024-07-15 13:52:20.969968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.994 [2024-07-15 13:52:20.970034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.994 [2024-07-15 13:52:20.970048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.994 [2024-07-15 13:52:20.970095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.994 [2024-07-15 13:52:20.970108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.994 [2024-07-15 13:52:20.970160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.994 [2024-07-15 13:52:20.970173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.252 NEW_FUNC[1/694]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:43.252 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.252 #5 NEW cov: 11814 ft: 11834 corp: 2/10b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:06:43.252 [2024-07-15 13:52:21.311384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.252 [2024-07-15 13:52:21.311469] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.252 [2024-07-15 13:52:21.311578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.252 [2024-07-15 13:52:21.311617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.252 [2024-07-15 13:52:21.311722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:43.252 [2024-07-15 13:52:21.311759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.252 [2024-07-15 13:52:21.311872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.252 [2024-07-15 13:52:21.311911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.511 #6 NEW cov: 11964 ft: 12437 corp: 3/19b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:06:43.511 [2024-07-15 13:52:21.370952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.370981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.371050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.371064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.371113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.371127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.371178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.371191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.511 #7 NEW cov: 11970 ft: 12768 corp: 4/28b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:43.511 [2024-07-15 13:52:21.410717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a30 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.410742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.511 #8 NEW cov: 12055 ft: 13402 corp: 5/30b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 InsertByte- 00:06:43.511 [2024-07-15 13:52:21.451214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.451245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.451298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.451312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.451362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.451376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.451428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.451441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.511 #9 NEW cov: 12055 ft: 13537 corp: 6/39b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:43.511 [2024-07-15 13:52:21.491330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.491356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.491409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.491424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.491477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.491491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.491545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.491559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.511 #10 NEW cov: 12055 ft: 13623 corp: 7/48b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:06:43.511 [2024-07-15 13:52:21.541459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000058bf cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.541486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.541539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.541553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.541606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.511 [2024-07-15 13:52:21.541619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.511 [2024-07-15 13:52:21.541672] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.512 [2024-07-15 13:52:21.541685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.512 #11 NEW cov: 12055 ft: 13758 corp: 8/57b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:06:43.771 [2024-07-15 13:52:21.591588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.591615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.591682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.591696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.591747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.591760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.591814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.591827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.771 #12 NEW cov: 12055 ft: 13823 corp: 9/66b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:06:43.771 [2024-07-15 13:52:21.641753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000058bf cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.641779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.641846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.641860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.641917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000581e cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.641930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.641982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.641995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.771 #13 NEW cov: 12055 ft: 13877 corp: 10/75b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:06:43.771 [2024-07-15 13:52:21.691883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000058bf cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.691908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.691959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.691972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.692022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000bf58 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.692036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.692086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.692099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.771 #14 NEW cov: 12055 ft: 13925 corp: 11/84b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:06:43.771 [2024-07-15 13:52:21.731952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.731978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.732033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.732047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.732098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.732112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.732161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.732175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.771 #15 NEW cov: 12055 ft: 13974 corp: 12/93b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:43.771 [2024-07-15 13:52:21.772187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.772212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.772269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.772282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.772334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.772350] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.772401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.772413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.772466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000b58 cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.772479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.771 #16 NEW cov: 12055 ft: 14039 corp: 13/103b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:06:43.771 [2024-07-15 13:52:21.822355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.822381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.822451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.822464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.822518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.822530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.822581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff2c cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.822595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.771 [2024-07-15 13:52:21.822647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:43.771 [2024-07-15 13:52:21.822660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.032 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:44.032 #17 NEW cov: 12078 ft: 14058 corp: 14/113b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:06:44.032 [2024-07-15 13:52:21.862463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.862487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.862553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffef cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.862567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.862618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 
nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.862631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.862683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.862696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.862748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.862765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.032 #18 NEW cov: 12078 ft: 14066 corp: 15/123b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:44.032 [2024-07-15 13:52:21.912562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.912587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.912656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff2f cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.912670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.912722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffef cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.912735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.912786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.912799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.912852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.912865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.032 #19 NEW cov: 12078 ft: 14108 corp: 16/133b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:06:44.032 [2024-07-15 13:52:21.952718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.952742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.952795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.952809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.952862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 
nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.952875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.952930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.952943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:21.952995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000580a cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.953008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.032 #20 NEW cov: 12078 ft: 14123 corp: 17/143b lim: 10 exec/s: 20 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:44.032 [2024-07-15 13:52:21.992336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:21.992360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.032 #21 NEW cov: 12078 ft: 14189 corp: 18/145b lim: 10 exec/s: 21 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:06:44.032 [2024-07-15 13:52:22.032770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.032797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:22.032865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.032879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:22.032928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.032941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:22.032995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.033009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.032 #22 NEW cov: 12078 ft: 14245 corp: 19/153b lim: 10 exec/s: 22 rss: 73Mb L: 8/10 MS: 1 EraseBytes- 00:06:44.032 [2024-07-15 13:52:22.072937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.072962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:22.073018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.073031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 
13:52:22.073085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.073099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.032 [2024-07-15 13:52:22.073150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.032 [2024-07-15 13:52:22.073163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.291 #23 NEW cov: 12078 ft: 14272 corp: 20/162b lim: 10 exec/s: 23 rss: 73Mb L: 9/10 MS: 1 EraseBytes- 00:06:44.291 [2024-07-15 13:52:22.123207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.123236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.291 [2024-07-15 13:52:22.123291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.123304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.291 [2024-07-15 13:52:22.123368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffef cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.123381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.291 [2024-07-15 13:52:22.123435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.123448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.291 [2024-07-15 13:52:22.123502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.123515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.291 #24 NEW cov: 12078 ft: 14285 corp: 21/172b lim: 10 exec/s: 24 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:44.291 [2024-07-15 13:52:22.163157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.163182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.291 [2024-07-15 13:52:22.163241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.163254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.291 [2024-07-15 13:52:22.163307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.291 [2024-07-15 13:52:22.163320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 
13:52:22.163372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.163385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.292 #25 NEW cov: 12078 ft: 14307 corp: 22/181b lim: 10 exec/s: 25 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:44.292 [2024-07-15 13:52:22.203455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.203481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.203534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.203548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.203602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffef cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.203616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.203669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.203682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.203735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.203748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.292 #26 NEW cov: 12078 ft: 14371 corp: 23/191b lim: 10 exec/s: 26 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:06:44.292 [2024-07-15 13:52:22.243308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.243332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.243401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.243415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.243467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.243480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.292 #27 NEW cov: 12078 ft: 14527 corp: 24/198b lim: 10 exec/s: 27 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:06:44.292 [2024-07-15 13:52:22.293654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.293681] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.293732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.293746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.293795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.293808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.293863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.293875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.293926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000580a cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.293939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.292 #28 NEW cov: 12078 ft: 14540 corp: 25/208b lim: 10 exec/s: 28 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:44.292 [2024-07-15 13:52:22.333624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.333651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.333704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.333717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.333770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.333783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.292 [2024-07-15 13:52:22.333836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.292 [2024-07-15 13:52:22.333849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.292 #29 NEW cov: 12078 ft: 14565 corp: 26/217b lim: 10 exec/s: 29 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:06:44.551 [2024-07-15 13:52:22.373880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.373905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.373959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.373972] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.374024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.374037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.374089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.374102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.374158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.374171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.551 #30 NEW cov: 12078 ft: 14590 corp: 27/227b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:44.551 [2024-07-15 13:52:22.413759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.413784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.413834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.413847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.413897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.413910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.551 #31 NEW cov: 12078 ft: 14600 corp: 28/234b lim: 10 exec/s: 31 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:06:44.551 [2024-07-15 13:52:22.453996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.454019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.454083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.454097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.454148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.454161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.454210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.551 
[2024-07-15 13:52:22.454228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.551 #32 NEW cov: 12078 ft: 14625 corp: 29/243b lim: 10 exec/s: 32 rss: 73Mb L: 9/10 MS: 1 CopyPart- 00:06:44.551 [2024-07-15 13:52:22.493720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001e00 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.493744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.551 #35 NEW cov: 12078 ft: 14693 corp: 30/245b lim: 10 exec/s: 35 rss: 73Mb L: 2/10 MS: 3 CrossOver-ShuffleBytes-InsertByte- 00:06:44.551 [2024-07-15 13:52:22.544358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.544383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.544434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.544448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.544496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.544512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.551 [2024-07-15 13:52:22.544561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.544574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.551 #36 NEW cov: 12078 ft: 14703 corp: 31/253b lim: 10 exec/s: 36 rss: 73Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:44.551 [2024-07-15 13:52:22.594561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.551 [2024-07-15 13:52:22.594585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.552 [2024-07-15 13:52:22.594636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007858 cdw11:00000000 00:06:44.552 [2024-07-15 13:52:22.594649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.552 [2024-07-15 13:52:22.594700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.552 [2024-07-15 13:52:22.594714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.552 [2024-07-15 13:52:22.594762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:44.552 [2024-07-15 13:52:22.594776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.552 [2024-07-15 13:52:22.594825] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000580a cdw11:00000000 00:06:44.552 [2024-07-15 13:52:22.594837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.552 #37 NEW cov: 12078 ft: 14711 corp: 32/263b lim: 10 exec/s: 37 rss: 73Mb L: 10/10 MS: 1 ChangeBit- 00:06:44.811 [2024-07-15 13:52:22.634502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.634526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.634576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.634590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.634639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000059ff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.634653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.634700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.634713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.811 #43 NEW cov: 12078 ft: 14741 corp: 33/271b lim: 10 exec/s: 43 rss: 73Mb L: 8/10 MS: 1 ChangeByte- 00:06:44.811 [2024-07-15 13:52:22.674659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.674684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.674736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.674750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.674803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eff7 cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.674816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.674865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.674878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.811 #44 NEW cov: 12078 ft: 14759 corp: 34/280b lim: 10 exec/s: 44 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:06:44.811 [2024-07-15 13:52:22.724772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.724797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.724864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.724878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.724927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.724941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.724990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.725002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.811 #45 NEW cov: 12078 ft: 14766 corp: 35/289b lim: 10 exec/s: 45 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:06:44.811 [2024-07-15 13:52:22.774898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.774922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.774987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.775001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.775052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.775066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.775116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.775129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.811 #46 NEW cov: 12078 ft: 14793 corp: 36/298b lim: 10 exec/s: 46 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:06:44.811 [2024-07-15 13:52:22.815119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.815144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.815195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffef cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.815208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.815263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.815280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.815331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.815344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.815393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000058ef cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.815406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.811 #47 NEW cov: 12078 ft: 14806 corp: 37/308b lim: 10 exec/s: 47 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:44.811 [2024-07-15 13:52:22.865198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.865230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.865281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.865294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.865344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000efff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.865358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.811 [2024-07-15 13:52:22.865406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.811 [2024-07-15 13:52:22.865419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.071 #48 NEW cov: 12078 ft: 14812 corp: 38/317b lim: 10 exec/s: 48 rss: 73Mb L: 9/10 MS: 1 CopyPart- 00:06:45.071 [2024-07-15 13:52:22.915465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005858 cdw11:00000000 00:06:45.071 [2024-07-15 13:52:22.915490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.071 [2024-07-15 13:52:22.915557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005858 cdw11:00000000 00:06:45.071 [2024-07-15 13:52:22.915570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.071 [2024-07-15 13:52:22.915620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000058f4 cdw11:00000000 00:06:45.071 [2024-07-15 13:52:22.915633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.071 [2024-07-15 13:52:22.915681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005858 cdw11:00000000 00:06:45.071 [2024-07-15 13:52:22.915694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:45.071 [2024-07-15 13:52:22.915744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000580a cdw11:00000000
00:06:45.071 [2024-07-15 13:52:22.915757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:45.071 #49 NEW cov: 12078 ft: 14821 corp: 39/327b lim: 10 exec/s: 49 rss: 73Mb L: 10/10 MS: 1 ChangeByte-
00:06:45.071 [2024-07-15 13:52:22.965608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:06:45.071 [2024-07-15 13:52:22.965636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.071 [2024-07-15 13:52:22.965687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:06:45.071 [2024-07-15 13:52:22.965701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:45.071 [2024-07-15 13:52:22.965750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000
00:06:45.071 [2024-07-15 13:52:22.965764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:45.071 [2024-07-15 13:52:22.965814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008000 cdw11:00000000
00:06:45.071 [2024-07-15 13:52:22.965827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:45.071 [2024-07-15 13:52:22.965876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000
00:06:45.071 [2024-07-15 13:52:22.965889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:45.071 #50 NEW cov: 12078 ft: 14829 corp: 40/337b lim: 10 exec/s: 25 rss: 74Mb L: 10/10 MS: 1 ChangeBit-
00:06:45.071 #50 DONE cov: 12078 ft: 14829 corp: 40/337b lim: 10 exec/s: 25 rss: 74Mb
00:06:45.071 ###### Recommended dictionary. ######
00:06:45.071 "\000\000\000\000\000\000\000\000" # Uses: 2
00:06:45.071 ###### End of recommended dictionary. ######
00:06:45.071 Done 50 runs in 2 second(s)
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:45.071 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:45.330 13:52:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8
00:06:45.330 [2024-07-15 13:52:23.181568] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
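The start_llvm_fuzz 8 1 0x1 trace above is the whole per-instance setup: nvmf/run.sh derives the NVMe/TCP service ID as "44" plus the zero-padded fuzzer number (printf %02d 8 yields 08, hence port 4408), rewrites the JSON target config to that port with sed, and registers two known in-process leaks with LeakSanitizer before launching llvm_nvme_fuzz. The bash sketch below reconstructs that sequence from the traced commands alone; the standalone framing, the spdk_root variable name, and redirecting the sed output into the per-fuzzer config are assumptions, not the verbatim contents of nvmf/run.sh.

#!/usr/bin/env bash
# Sketch of the setup traced above. Assumed: spdk_root and the redirect of
# the sed output into $nvmf_cfg; everything else mirrors the trace.
fuzzer_type=8   # from "start_llvm_fuzz 8 1 0x1"
timen=1         # -t, run time in seconds
core=0x1        # -m, reactor core mask
spdk_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # assumed name

# Port 4408 = "44" + zero-padded fuzzer number ("printf %02d 8" -> "08").
port="44$(printf %02d "$fuzzer_type")"

# Point the generated target config at the derived port, matching the
# traced sed over test/fuzz/llvm/nvmf/fuzz_json.conf.
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$spdk_root/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Suppress the two known in-process leaks for LeakSanitizer.
suppress_file=/var/tmp/suppress_nvmf_fuzz
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

# Transport ID the fuzzer connects to; the listener side comes from $nvmf_cfg.
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

corpus_dir="$spdk_root/../corpus/llvm_nvmf_${fuzzer_type}"
mkdir -p "$corpus_dir"
"$spdk_root/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$spdk_root/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$corpus_dir" -Z "$fuzzer_type"

Judging by the interleaved output that follows, the fuzzer loop and the NVMe/TCP target appear to run in a single process: the tcp.c transport-init NOTICE lines and the libFuzzer INFO lines share one stream, and the target listens on the freshly derived port 4408.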
00:06:45.330 [2024-07-15 13:52:23.181644] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844668 ]
00:06:45.589 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.589 [2024-07-15 13:52:23.402816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.589 [2024-07-15 13:52:23.473740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.589 [2024-07-15 13:52:23.533490] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:45.589 [2024-07-15 13:52:23.549769] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 ***
00:06:45.589 INFO: Running with entropic power schedule (0xFF, 100).
00:06:45.589 INFO: Seed: 1159013708
00:06:45.589 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c),
00:06:45.589 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560),
00:06:45.589 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:06:45.589 INFO: A corpus is not provided, starting from an empty corpus
00:06:45.589 [2024-07-15 13:52:23.615018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.589 [2024-07-15 13:52:23.615046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:45.589 #2 INITED cov: 11860 ft: 11861 corp: 1/1b exec/s: 0 rss: 70Mb
00:06:45.589 [2024-07-15 13:52:23.655056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:45.589 [2024-07-15 13:52:23.655081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0
sqhd:000f p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.796089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.796102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.796154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.796167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.796222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.796238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.796293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.796306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.848 #6 NEW cov: 12083 ft: 14020 corp: 5/9b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:45.848 [2024-07-15 13:52:23.846116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.846140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.846194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.846207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.846278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.846292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.846345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.846358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.846410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.846424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.848 #7 
NEW cov: 12083 ft: 14079 corp: 6/14b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:45.848 [2024-07-15 13:52:23.896112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.896136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.896206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.896224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.896279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.848 [2024-07-15 13:52:23.896292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.848 [2024-07-15 13:52:23.896345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.849 [2024-07-15 13:52:23.896368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.107 #8 NEW cov: 12083 ft: 14118 corp: 7/18b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:46.107 [2024-07-15 13:52:23.945805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:23.945831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.107 #9 NEW cov: 12083 ft: 14168 corp: 8/19b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:46.107 [2024-07-15 13:52:23.985943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:23.985967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.107 #10 NEW cov: 12083 ft: 14189 corp: 9/20b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:46.107 [2024-07-15 13:52:24.026574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.026599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.026668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.026682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.026733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.026747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.026800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.026813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.026866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.026879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.107 #11 NEW cov: 12083 ft: 14237 corp: 10/25b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:46.107 [2024-07-15 13:52:24.076741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.076765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.076834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.076847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.076900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.076913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.076967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.076980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.077036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.077049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.107 #12 NEW cov: 12083 ft: 14253 corp: 11/30b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:46.107 [2024-07-15 13:52:24.116879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.116903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 
13:52:24.116957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.116971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.117025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.117038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.117088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.117102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.107 [2024-07-15 13:52:24.117155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.117169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.107 #13 NEW cov: 12083 ft: 14287 corp: 12/35b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:46.107 [2024-07-15 13:52:24.166396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.107 [2024-07-15 13:52:24.166420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 #14 NEW cov: 12083 ft: 14337 corp: 13/36b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:06:46.366 [2024-07-15 13:52:24.216556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.216582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 #15 NEW cov: 12083 ft: 14401 corp: 14/37b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:46.366 [2024-07-15 13:52:24.256770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.256797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 [2024-07-15 13:52:24.256851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.256864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.366 #16 NEW cov: 12083 ft: 14580 corp: 15/39b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:46.366 [2024-07-15 13:52:24.296913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.296943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 [2024-07-15 13:52:24.296997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.297010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.366 #17 NEW cov: 12083 ft: 14639 corp: 16/41b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:46.366 [2024-07-15 13:52:24.346982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.347009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 [2024-07-15 13:52:24.347063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.347076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.366 #18 NEW cov: 12083 ft: 14657 corp: 17/43b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:46.366 [2024-07-15 13:52:24.387001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.387027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 #19 NEW cov: 12083 ft: 14668 corp: 18/44b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:46.366 [2024-07-15 13:52:24.427514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.427540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.366 [2024-07-15 13:52:24.427597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.427612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.366 [2024-07-15 13:52:24.427662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.427676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.366 [2024-07-15 13:52:24.427727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.366 [2024-07-15 13:52:24.427739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.625 #20 NEW cov: 12083 ft: 14674 corp: 19/48b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:06:46.625 [2024-07-15 13:52:24.467365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.625 [2024-07-15 13:52:24.467390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.625 [2024-07-15 13:52:24.467444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.625 [2024-07-15 13:52:24.467461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.884 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:46.884 #21 NEW cov: 12106 ft: 14758 corp: 20/50b lim: 5 exec/s: 21 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:46.884 [2024-07-15 13:52:24.798390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.798453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.884 #22 NEW cov: 12106 ft: 14885 corp: 21/51b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:06:46.884 [2024-07-15 13:52:24.858452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.858479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.884 [2024-07-15 13:52:24.858534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.858548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.884 #23 NEW cov: 12106 ft: 14900 corp: 22/53b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:46.884 [2024-07-15 13:52:24.909006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.909030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.884 [2024-07-15 13:52:24.909104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.909119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.884 [2024-07-15 13:52:24.909173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.909186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.884 [2024-07-15 13:52:24.909244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.909258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.884 [2024-07-15 13:52:24.909323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.884 [2024-07-15 13:52:24.909336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.884 #24 NEW cov: 12106 ft: 14921 corp: 23/58b lim: 5 exec/s: 24 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:47.143 [2024-07-15 13:52:24.958714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:24.958740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:24.958797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:24.958811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.143 #25 NEW cov: 12106 ft: 14957 corp: 24/60b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:47.143 [2024-07-15 13:52:25.009004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.009028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.009084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.009098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.009153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.009166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.143 #26 NEW cov: 12106 ft: 15122 corp: 25/63b lim: 5 exec/s: 26 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:06:47.143 [2024-07-15 13:52:25.059094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.059119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.059191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.059204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.059260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.059275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.143 #27 NEW cov: 12106 ft: 15128 corp: 26/66b lim: 5 exec/s: 27 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:06:47.143 [2024-07-15 13:52:25.098887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.098911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.143 #28 NEW cov: 12106 ft: 15169 corp: 27/67b lim: 5 exec/s: 28 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:06:47.143 [2024-07-15 13:52:25.139635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.139662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.139722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.139738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.139796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.139810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.139866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.139883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.139938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.139953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.143 #29 NEW cov: 12106 ft: 15212 corp: 28/72b lim: 5 exec/s: 29 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:47.143 [2024-07-15 13:52:25.179763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.179788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.179861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.179875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.179930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.179944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.179999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.180013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.143 [2024-07-15 13:52:25.180066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.143 [2024-07-15 13:52:25.180080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.403 #30 NEW cov: 12106 ft: 15226 corp: 29/77b lim: 5 exec/s: 30 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:47.403 [2024-07-15 13:52:25.229657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.229682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.229737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.229751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.229806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.229819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.403 #31 NEW cov: 12106 ft: 15233 corp: 30/80b lim: 5 exec/s: 31 rss: 74Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:47.403 [2024-07-15 13:52:25.280034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.280059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.280133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.280151] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.280204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.280223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.280279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.280292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.280359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.280372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.403 #32 NEW cov: 12106 ft: 15302 corp: 31/85b lim: 5 exec/s: 32 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:47.403 [2024-07-15 13:52:25.330191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.330215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.330292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.330306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.330361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.330374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.330431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.330446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.330500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.330513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.403 #33 NEW cov: 12106 ft: 15306 corp: 32/90b lim: 5 exec/s: 33 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:47.403 [2024-07-15 13:52:25.380199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:47.403 [2024-07-15 13:52:25.380228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.380286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.380300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.380357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.380373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.380429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.380442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.403 #34 NEW cov: 12106 ft: 15312 corp: 33/94b lim: 5 exec/s: 34 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:06:47.403 [2024-07-15 13:52:25.420260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.420286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.420358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.420372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.420427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.420441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.420497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.420512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.403 #35 NEW cov: 12106 ft: 15319 corp: 34/98b lim: 5 exec/s: 35 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:47.403 [2024-07-15 13:52:25.460569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.460593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.460663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.460677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.460734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.460747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.460805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.460819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.403 [2024-07-15 13:52:25.460874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.403 [2024-07-15 13:52:25.460887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.663 #36 NEW cov: 12106 ft: 15332 corp: 35/103b lim: 5 exec/s: 36 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:06:47.663 [2024-07-15 13:52:25.500522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.500547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.500619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.500634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.500690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.500704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.500758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.500772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.663 #37 NEW cov: 12106 ft: 15340 corp: 36/107b lim: 5 exec/s: 37 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:47.663 [2024-07-15 13:52:25.540799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.540824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.540881] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.540894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.540949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.540963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.541015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.541028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.541084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.541097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.663 #38 NEW cov: 12106 ft: 15346 corp: 37/112b lim: 5 exec/s: 38 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:47.663 [2024-07-15 13:52:25.580899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.580923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.580998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.581013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.581069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.581084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.581139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.581153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.663 [2024-07-15 13:52:25.581209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.663 [2024-07-15 13:52:25.581227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.663 #39 NEW cov: 12106 ft: 15353 corp: 38/117b lim: 5 exec/s: 19 rss: 74Mb 
L: 5/5 MS: 1 InsertRepeatedBytes-
00:06:47.663 #39 DONE cov: 12106 ft: 15353 corp: 38/117b lim: 5 exec/s: 19 rss: 74Mb
00:06:47.663 Done 39 runs in 2 second(s)
00:06:47.943 13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
13:52:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-07-15 13:52:25.797236] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:47.944 [2024-07-15 13:52:25.797308] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844968 ] 00:06:47.944 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.212 [2024-07-15 13:52:26.014252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.213 [2024-07-15 13:52:26.086290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.213 [2024-07-15 13:52:26.145608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.213 [2024-07-15 13:52:26.161904] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:48.213 INFO: Running with entropic power schedule (0xFF, 100). 00:06:48.213 INFO: Seed: 3771012720 00:06:48.213 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:48.213 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:48.213 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:48.213 INFO: A corpus is not provided, starting from an empty corpus 00:06:48.213 [2024-07-15 13:52:26.227258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.213 [2024-07-15 13:52:26.227288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.213 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:48.213 [2024-07-15 13:52:26.267211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.213 [2024-07-15 13:52:26.267240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.483 #3 NEW cov: 11992 ft: 12262 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeByte- 00:06:48.483 [2024-07-15 13:52:26.317977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.318002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.318074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.318089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.318144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.318158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.318211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.318228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.318282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.318296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.483 #4 NEW cov: 11998 ft: 13454 corp: 3/7b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:48.483 [2024-07-15 13:52:26.367637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.367664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.367717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.367732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.483 #5 NEW cov: 12083 ft: 13876 corp: 4/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:06:48.483 [2024-07-15 13:52:26.407914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.407939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.408012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.408027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.483 [2024-07-15 13:52:26.408080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.483 [2024-07-15 13:52:26.408093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.483 #6 NEW cov: 12083 ft: 14122 corp: 5/12b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 InsertByte- 00:06:48.483 [2024-07-15 13:52:26.457885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.457910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.484 [2024-07-15 13:52:26.457984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.457998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.484 #7 
NEW cov: 12083 ft: 14166 corp: 6/14b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:06:48.484 [2024-07-15 13:52:26.498538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.498563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.484 [2024-07-15 13:52:26.498617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.498631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.484 [2024-07-15 13:52:26.498686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.498699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.484 [2024-07-15 13:52:26.498753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.498766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.484 [2024-07-15 13:52:26.498819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.498832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.484 #8 NEW cov: 12083 ft: 14293 corp: 7/19b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:06:48.484 [2024-07-15 13:52:26.548005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.484 [2024-07-15 13:52:26.548033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.742 #9 NEW cov: 12083 ft: 14324 corp: 8/20b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:06:48.742 [2024-07-15 13:52:26.598790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.742 [2024-07-15 13:52:26.598815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.742 [2024-07-15 13:52:26.598873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.742 [2024-07-15 13:52:26.598886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.742 [2024-07-15 13:52:26.598940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.742 
[2024-07-15 13:52:26.598953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.599006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.599019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.599073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.599086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.743 #10 NEW cov: 12083 ft: 14346 corp: 9/25b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:48.743 [2024-07-15 13:52:26.638872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.638897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.638966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.638980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.639034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.639047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.639104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.639117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.639172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.639186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.743 #11 NEW cov: 12083 ft: 14438 corp: 10/30b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:06:48.743 [2024-07-15 13:52:26.689007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.689034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.689092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.689106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.689158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.689171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.689227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.689240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.689294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.689307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.743 #12 NEW cov: 12083 ft: 14533 corp: 11/35b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:06:48.743 [2024-07-15 13:52:26.738539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.738563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.743 #13 NEW cov: 12083 ft: 14589 corp: 12/36b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:06:48.743 [2024-07-15 13:52:26.778816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.778841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.743 [2024-07-15 13:52:26.778893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.743 [2024-07-15 13:52:26.778907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.002 #14 NEW cov: 12083 ft: 14670 corp: 13/38b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:06:49.002 [2024-07-15 13:52:26.828784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.828809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.002 #15 NEW cov: 12083 ft: 14686 corp: 14/39b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:06:49.002 [2024-07-15 13:52:26.879060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 
13:52:26.879084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.879154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.879168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.002 #16 NEW cov: 12083 ft: 14703 corp: 15/41b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeByte- 00:06:49.002 [2024-07-15 13:52:26.919655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.919679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.919733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.919747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.919801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.919815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.919867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.919880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.919932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.919945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.002 #17 NEW cov: 12083 ft: 14745 corp: 16/46b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:49.002 [2024-07-15 13:52:26.959487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.959512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.959580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.959594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:26.959647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:26.959661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.002 #18 NEW cov: 12083 ft: 14765 corp: 17/49b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 ChangeBit- 00:06:49.002 [2024-07-15 13:52:27.009446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.009471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:27.009527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.009541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.002 #19 NEW cov: 12083 ft: 14871 corp: 18/51b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBit- 00:06:49.002 [2024-07-15 13:52:27.050017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.050047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:27.050100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.050113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:27.050168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.050197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:27.050253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.050266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.002 [2024-07-15 13:52:27.050317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.002 [2024-07-15 13:52:27.050331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.002 #20 NEW cov: 12083 ft: 14883 corp: 19/56b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:06:49.261 [2024-07-15 13:52:27.090168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.261 [2024-07-15 13:52:27.090194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:49.261 [2024-07-15 13:52:27.090248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.261 [2024-07-15 13:52:27.090263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.261 [2024-07-15 13:52:27.090317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.261 [2024-07-15 13:52:27.090331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.261 [2024-07-15 13:52:27.090388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.261 [2024-07-15 13:52:27.090402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.261 [2024-07-15 13:52:27.090456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.261 [2024-07-15 13:52:27.090470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.520 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:49.520 #21 NEW cov: 12106 ft: 14923 corp: 20/61b lim: 5 exec/s: 21 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:49.521 [2024-07-15 13:52:27.420791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.420851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.521 [2024-07-15 13:52:27.420944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.420971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.521 #22 NEW cov: 12106 ft: 15104 corp: 21/63b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:49.521 [2024-07-15 13:52:27.470442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.470469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.521 #23 NEW cov: 12106 ft: 15142 corp: 22/64b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:49.521 [2024-07-15 13:52:27.510848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.510873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.521 [2024-07-15 13:52:27.510944] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.510957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.521 [2024-07-15 13:52:27.511012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.511025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.521 #24 NEW cov: 12106 ft: 15153 corp: 23/67b lim: 5 exec/s: 24 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:06:49.521 [2024-07-15 13:52:27.550804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.550829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.521 [2024-07-15 13:52:27.550900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.521 [2024-07-15 13:52:27.550914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.521 #25 NEW cov: 12106 ft: 15163 corp: 24/69b lim: 5 exec/s: 25 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:49.790 [2024-07-15 13:52:27.601415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.601442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.601497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.601510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.601564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.601577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.601630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.601646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.601700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.601713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:06:49.790 #26 NEW cov: 12106 ft: 15169 corp: 25/74b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:49.790 [2024-07-15 13:52:27.641377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.641402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.641456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.641469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.641523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.641536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.641590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.641603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.790 #27 NEW cov: 12106 ft: 15183 corp: 26/78b lim: 5 exec/s: 27 rss: 73Mb L: 4/5 MS: 1 EraseBytes- 00:06:49.790 [2024-07-15 13:52:27.691201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.691230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.691300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.691314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.790 #28 NEW cov: 12106 ft: 15191 corp: 27/80b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:06:49.790 [2024-07-15 13:52:27.731615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.731639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.731711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.731725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.731778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.731792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.731843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.731860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.790 #29 NEW cov: 12106 ft: 15220 corp: 28/84b lim: 5 exec/s: 29 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:49.790 [2024-07-15 13:52:27.781756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.781780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.781852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.781867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.781921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.781933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.781988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.782002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.790 #30 NEW cov: 12106 ft: 15229 corp: 29/88b lim: 5 exec/s: 30 rss: 73Mb L: 4/5 MS: 1 EraseBytes- 00:06:49.790 [2024-07-15 13:52:27.831654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.831679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.790 [2024-07-15 13:52:27.831734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.790 [2024-07-15 13:52:27.831747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.790 #31 NEW cov: 12106 ft: 15240 corp: 30/90b lim: 5 exec/s: 31 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:06:50.049 [2024-07-15 13:52:27.872187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.872211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 
13:52:27.872295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.872309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.872365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.872378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.872431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.872445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.872498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.872514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.049 #32 NEW cov: 12106 ft: 15248 corp: 31/95b lim: 5 exec/s: 32 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:50.049 [2024-07-15 13:52:27.912110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.912135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.912205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.912223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.912277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.912301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.912355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.912367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.049 #33 NEW cov: 12106 ft: 15269 corp: 32/99b lim: 5 exec/s: 33 rss: 73Mb L: 4/5 MS: 1 CrossOver- 00:06:50.049 [2024-07-15 13:52:27.962427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.962451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.962524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.962538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.962592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.962606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.962661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.962676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:27.962729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:27.962742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.049 #34 NEW cov: 12106 ft: 15288 corp: 33/104b lim: 5 exec/s: 34 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:50.049 [2024-07-15 13:52:28.012551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:28.012576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:28.012635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:28.012648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:28.012702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:28.012715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:28.012767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:28.012780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.049 [2024-07-15 13:52:28.012832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.049 [2024-07-15 13:52:28.012845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:06:50.050 #35 NEW cov: 12106 ft: 15291 corp: 34/109b lim: 5 exec/s: 35 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:50.050 [2024-07-15 13:52:28.052533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.050 [2024-07-15 13:52:28.052557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.050 [2024-07-15 13:52:28.052626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.050 [2024-07-15 13:52:28.052640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.050 [2024-07-15 13:52:28.052694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.050 [2024-07-15 13:52:28.052708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.050 [2024-07-15 13:52:28.052761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.050 [2024-07-15 13:52:28.052774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.050 #36 NEW cov: 12106 ft: 15304 corp: 35/113b lim: 5 exec/s: 36 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:50.050 [2024-07-15 13:52:28.102388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.050 [2024-07-15 13:52:28.102412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.050 [2024-07-15 13:52:28.102468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.050 [2024-07-15 13:52:28.102481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.310 #37 NEW cov: 12106 ft: 15329 corp: 36/115b lim: 5 exec/s: 37 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:50.310 [2024-07-15 13:52:28.153010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.153034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.153109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.153123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.153179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:50.310 [2024-07-15 13:52:28.153192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.153249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.153263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.153327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.153341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.310 #38 NEW cov: 12106 ft: 15359 corp: 37/120b lim: 5 exec/s: 38 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:50.310 [2024-07-15 13:52:28.203103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.203128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.203184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.203197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.203269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.203283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.203337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.203350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.310 [2024-07-15 13:52:28.203403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.310 [2024-07-15 13:52:28.203416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.310 #39 NEW cov: 12106 ft: 15379 corp: 38/125b lim: 5 exec/s: 19 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:50.310 #39 DONE cov: 12106 ft: 15379 corp: 38/125b lim: 5 exec/s: 19 rss: 73Mb 00:06:50.310 Done 39 runs in 2 second(s) 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 
1 0x1 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:50.310 13:52:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:50.569 [2024-07-15 13:52:28.400984] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:50.569 [2024-07-15 13:52:28.401053] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845272 ] 00:06:50.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.569 [2024-07-15 13:52:28.617734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.828 [2024-07-15 13:52:28.689828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.828 [2024-07-15 13:52:28.749479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.828 [2024-07-15 13:52:28.765766] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:50.828 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:50.828 INFO: Seed: 2078053278 00:06:50.828 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:50.828 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:50.828 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:50.828 INFO: A corpus is not provided, starting from an empty corpus 00:06:50.828 #2 INITED exec/s: 0 rss: 64Mb 00:06:50.828 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:50.828 This may also happen if the target rejected all inputs we tried so far 00:06:50.828 [2024-07-15 13:52:28.824951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.828 [2024-07-15 13:52:28.824981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.828 [2024-07-15 13:52:28.825040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.828 [2024-07-15 13:52:28.825053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.828 [2024-07-15 13:52:28.825110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.828 [2024-07-15 13:52:28.825128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.087 NEW_FUNC[1/693]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:51.087 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.087 #4 NEW cov: 11883 ft: 11885 corp: 2/31b lim: 40 exec/s: 0 rss: 72Mb L: 30/30 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:51.346 [2024-07-15 13:52:29.166268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.166329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.166419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.166446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.166532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.166557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.166643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.166668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.346 NEW_FUNC[1/2]: 0xf46dc0 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:297 00:06:51.346 NEW_FUNC[2/2]: 0xf46e20 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:06:51.346 #5 NEW cov: 12015 ft: 12977 corp: 3/70b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:51.346 [2024-07-15 13:52:29.236110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.236137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.236197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.236210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.236290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.236304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.236363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.236377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.346 #6 NEW cov: 12021 ft: 13305 corp: 4/109b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ChangeBinInt- 00:06:51.346 [2024-07-15 13:52:29.286354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.286381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.286456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.286470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.286528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.286541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.286597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 
[2024-07-15 13:52:29.286610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.286667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.286680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.346 #7 NEW cov: 12106 ft: 13588 corp: 5/149b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 InsertByte- 00:06:51.346 [2024-07-15 13:52:29.336221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.336246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.336321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.336335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.336396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.336409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.346 #8 NEW cov: 12106 ft: 13649 corp: 6/179b lim: 40 exec/s: 0 rss: 72Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:51.346 [2024-07-15 13:52:29.376117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:011311a8 cdw11:c49b51b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.376141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.346 #10 NEW cov: 12106 ft: 14108 corp: 7/188b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 2 ChangeBit-CMP- DE: "\001\023\021\250\304\233Q\270"- 00:06:51.346 [2024-07-15 13:52:29.416488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.416513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.416575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.416589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.346 [2024-07-15 13:52:29.416648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.346 [2024-07-15 13:52:29.416664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.606 #11 NEW cov: 
12106 ft: 14209 corp: 8/218b lim: 40 exec/s: 0 rss: 72Mb L: 30/40 MS: 1 CopyPart- 00:06:51.606 [2024-07-15 13:52:29.456311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:011311a8 cdw11:c41b51b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.456336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.606 #12 NEW cov: 12106 ft: 14244 corp: 9/227b lim: 40 exec/s: 0 rss: 73Mb L: 9/40 MS: 1 ChangeBit- 00:06:51.606 [2024-07-15 13:52:29.506459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.506485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.606 #13 NEW cov: 12106 ft: 14321 corp: 10/240b lim: 40 exec/s: 0 rss: 73Mb L: 13/40 MS: 1 InsertRepeatedBytes- 00:06:51.606 [2024-07-15 13:52:29.547089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.547114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.547176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.547189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.547248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.547262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.547317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.547330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.547389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.547402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.606 #14 NEW cov: 12106 ft: 14397 corp: 11/280b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 00:06:51.606 [2024-07-15 13:52:29.596967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.596991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.597052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.597065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.597124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.597137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.606 #15 NEW cov: 12106 ft: 14447 corp: 12/310b lim: 40 exec/s: 0 rss: 73Mb L: 30/40 MS: 1 CopyPart- 00:06:51.606 [2024-07-15 13:52:29.637332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.637357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.637418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.637432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.637492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.637505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.637562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.637576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.606 [2024-07-15 13:52:29.637634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.606 [2024-07-15 13:52:29.637649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.606 #16 NEW cov: 12106 ft: 14537 corp: 13/350b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:06:51.866 [2024-07-15 13:52:29.687374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.687399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.866 [2024-07-15 13:52:29.687475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:011311a8 cdw11:c49b51b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.687488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.866 [2024-07-15 13:52:29.687549] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.687563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.866 [2024-07-15 13:52:29.687621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.687634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.866 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:51.866 #17 NEW cov: 12129 ft: 14660 corp: 14/389b lim: 40 exec/s: 0 rss: 73Mb L: 39/40 MS: 1 PersAutoDict- DE: "\001\023\021\250\304\233Q\270"- 00:06:51.866 [2024-07-15 13:52:29.727600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.727626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.866 [2024-07-15 13:52:29.727688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.727706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.866 [2024-07-15 13:52:29.727763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ff24ff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.866 [2024-07-15 13:52:29.727777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.866 [2024-07-15 13:52:29.727839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.727853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.727912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.727926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.867 #18 NEW cov: 12129 ft: 14696 corp: 15/429b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:06:51.867 [2024-07-15 13:52:29.777471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.777497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.777558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000dd cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:51.867 [2024-07-15 13:52:29.777572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.777631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.777645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.867 #19 NEW cov: 12129 ft: 14725 corp: 16/459b lim: 40 exec/s: 19 rss: 73Mb L: 30/40 MS: 1 ChangeByte- 00:06:51.867 [2024-07-15 13:52:29.827867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.827894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.827955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.827970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.828027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.828041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.828101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.828114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.828175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00fa000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.828192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.867 #20 NEW cov: 12129 ft: 14756 corp: 17/499b lim: 40 exec/s: 20 rss: 73Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:51.867 [2024-07-15 13:52:29.867709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:18000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.867734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.867809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.867823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.867883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.867897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.867 #21 NEW cov: 12129 ft: 14795 corp: 18/529b lim: 40 exec/s: 21 rss: 73Mb L: 30/40 MS: 1 ChangeByte- 00:06:51.867 [2024-07-15 13:52:29.907961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.907986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.908062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.908076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.908136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00002800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.908150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.867 [2024-07-15 13:52:29.908208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.867 [2024-07-15 13:52:29.908227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.867 #22 NEW cov: 12129 ft: 14819 corp: 19/568b lim: 40 exec/s: 22 rss: 73Mb L: 39/40 MS: 1 ChangeByte- 00:06:52.126 [2024-07-15 13:52:29.948237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.126 [2024-07-15 13:52:29.948262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.126 [2024-07-15 13:52:29.948324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.126 [2024-07-15 13:52:29.948337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.126 [2024-07-15 13:52:29.948396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00002800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.126 [2024-07-15 13:52:29.948410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.126 [2024-07-15 13:52:29.948470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.126 [2024-07-15 13:52:29.948486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.126 [2024-07-15 13:52:29.948546] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.126 [2024-07-15 13:52:29.948559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.126 #23 NEW cov: 12129 ft: 14868 corp: 20/608b lim: 40 exec/s: 23 rss: 73Mb L: 40/40 MS: 1 InsertByte- 00:06:52.126 [2024-07-15 13:52:29.998245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.126 [2024-07-15 13:52:29.998271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:29.998349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:29.998364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:29.998426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00002800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:29.998439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:29.998498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:1311a8c4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:29.998512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.127 #24 NEW cov: 12129 ft: 14939 corp: 21/647b lim: 40 exec/s: 24 rss: 73Mb L: 39/40 MS: 1 PersAutoDict- DE: "\001\023\021\250\304\233Q\270"- 00:06:52.127 [2024-07-15 13:52:30.038013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01011311 cdw11:a8c49b51 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.038046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.127 #25 NEW cov: 12129 ft: 14959 corp: 22/656b lim: 40 exec/s: 25 rss: 73Mb L: 9/40 MS: 1 PersAutoDict- DE: "\001\023\021\250\304\233Q\270"- 00:06:52.127 [2024-07-15 13:52:30.078591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.078623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.078686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.078700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.078763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ff24ff cdw11:ffffff00 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.078777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.078836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.078849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.078915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.078929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.127 #26 NEW cov: 12129 ft: 14995 corp: 23/696b lim: 40 exec/s: 26 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:06:52.127 [2024-07-15 13:52:30.128617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.128645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.128706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.128720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.128777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00002800 cdw11:0000ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.128790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.128848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:0000a8c4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.128860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.127 #27 NEW cov: 12129 ft: 15011 corp: 24/735b lim: 40 exec/s: 27 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:06:52.127 [2024-07-15 13:52:30.178615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.178641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.178716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000dd cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.178730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.127 [2024-07-15 13:52:30.178789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.127 [2024-07-15 13:52:30.178802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.386 #28 NEW cov: 12129 ft: 15031 corp: 25/765b lim: 40 exec/s: 28 rss: 73Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:52.386 [2024-07-15 13:52:30.229026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.229050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.229111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:24ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.229125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.229198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.229215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.229280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.229293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.229354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.229368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.386 #29 NEW cov: 12129 ft: 15040 corp: 26/805b lim: 40 exec/s: 29 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:52.386 [2024-07-15 13:52:30.268594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:011311a8 cdw11:c49b51b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.268619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.386 #30 NEW cov: 12129 ft: 15054 corp: 27/814b lim: 40 exec/s: 30 rss: 73Mb L: 9/40 MS: 1 PersAutoDict- DE: "\001\023\021\250\304\233Q\270"- 00:06:52.386 [2024-07-15 13:52:30.308679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01011311 cdw11:e8c49b51 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.308704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.386 #31 NEW cov: 12129 ft: 15067 corp: 28/823b lim: 40 exec/s: 31 rss: 73Mb L: 9/40 MS: 1 ChangeBit- 00:06:52.386 [2024-07-15 13:52:30.358831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f8feecee cdw11:573b64ae SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.358855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.386 #32 NEW cov: 12129 ft: 15073 corp: 29/832b lim: 40 exec/s: 32 rss: 73Mb L: 9/40 MS: 1 ChangeBinInt- 00:06:52.386 [2024-07-15 13:52:30.399346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.399371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.399432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.399446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.399509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00002800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.399522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.399582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.399596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.386 #33 NEW cov: 12129 ft: 15091 corp: 30/871b lim: 40 exec/s: 33 rss: 73Mb L: 39/40 MS: 1 PersAutoDict- DE: "\001\023\021\250\304\233Q\270"- 00:06:52.386 [2024-07-15 13:52:30.439258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.439286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.386 [2024-07-15 13:52:30.439350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.386 [2024-07-15 13:52:30.439364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.646 #34 NEW cov: 12129 ft: 15269 corp: 31/887b lim: 40 exec/s: 34 rss: 73Mb L: 16/40 MS: 1 EraseBytes- 00:06:52.646 [2024-07-15 13:52:30.489476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:18000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.489501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.489565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.489579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.489656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.489670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.646 #35 NEW cov: 12129 ft: 15316 corp: 32/917b lim: 40 exec/s: 35 rss: 73Mb L: 30/40 MS: 1 ChangeBit- 00:06:52.646 [2024-07-15 13:52:30.539623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:18000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.539648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.539723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00d7810f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.539737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.539796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:4c1f7f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.539810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.646 #36 NEW cov: 12129 ft: 15336 corp: 33/947b lim: 40 exec/s: 36 rss: 73Mb L: 30/40 MS: 1 CMP- DE: "\327\201\017L\037\177\000\000"- 00:06:52.646 [2024-07-15 13:52:30.579847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:18000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.579871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.579935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.579949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.580010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.580024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.580084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.580100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.646 #37 NEW cov: 12129 ft: 15342 corp: 34/982b lim: 40 exec/s: 37 rss: 73Mb L: 35/40 MS: 1 CopyPart- 00:06:52.646 [2024-07-15 13:52:30.629713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.629737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.629796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:011311a8 cdw11:c49b51b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.629810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.646 #38 NEW cov: 12129 ft: 15353 corp: 35/999b lim: 40 exec/s: 38 rss: 73Mb L: 17/40 MS: 1 InsertRepeatedBytes- 00:06:52.646 [2024-07-15 13:52:30.670181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00002f00 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.670206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.670269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.670283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.670343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.670356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.670413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.670426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.670484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.670497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.646 #39 NEW cov: 12129 ft: 15361 corp: 36/1039b lim: 40 exec/s: 39 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:06:52.646 [2024-07-15 13:52:30.709891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.709915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.646 [2024-07-15 13:52:30.709992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d7810f4c cdw11:1f7f0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.646 [2024-07-15 13:52:30.710006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.906 #40 NEW cov: 12129 ft: 15370 corp: 37/1055b lim: 40 exec/s: 40 rss: 73Mb L: 16/40 
MS: 1 PersAutoDict- DE: "\327\201\017L\037\177\000\000"- 00:06:52.906 [2024-07-15 13:52:30.760345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.906 [2024-07-15 13:52:30.760370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.906 [2024-07-15 13:52:30.760429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.906 [2024-07-15 13:52:30.760443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.906 [2024-07-15 13:52:30.760518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.906 [2024-07-15 13:52:30.760532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.906 [2024-07-15 13:52:30.760589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.906 [2024-07-15 13:52:30.760603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.906 #41 NEW cov: 12129 ft: 15374 corp: 38/1094b lim: 40 exec/s: 41 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:06:52.906 [2024-07-15 13:52:30.810069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:04ffecee cdw11:a8c49b51 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.906 [2024-07-15 13:52:30.810094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.906 #42 NEW cov: 12129 ft: 15378 corp: 39/1103b lim: 40 exec/s: 21 rss: 74Mb L: 9/40 MS: 1 ChangeBinInt- 00:06:52.906 #42 DONE cov: 12129 ft: 15378 corp: 39/1103b lim: 40 exec/s: 21 rss: 74Mb 00:06:52.906 ###### Recommended dictionary. ###### 00:06:52.906 "\001\023\021\250\304\233Q\270" # Uses: 5 00:06:52.906 "\327\201\017L\037\177\000\000" # Uses: 1 00:06:52.906 ###### End of recommended dictionary. 
###### 00:06:52.906 Done 42 runs in 2 second(s) 00:06:52.906 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.906 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.906 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.906 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:52.906 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:52.906 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:52.926 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:53.185 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:53.185 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:53.185 13:52:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:53.185 [2024-07-15 13:52:31.011479] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
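The xtrace above captures how nvmf/run.sh provisions fuzzer run 11 before the log output resumes: it derives listener port 4411 from the fuzzer number, creates the per-run corpus directory, rewrites trsvcid in fuzz_json.conf, registers two LeakSanitizer suppressions, and launches llvm_nvme_fuzz against the TCP transport ID. A minimal sketch of the same sequence for a local reproduction follows; SPDK_DIR and OUT_DIR are assumptions standing in for the hard-coded Jenkins workspace paths, and the suppression-file destination is inferred from the LSAN_OPTIONS value at run.sh@32 rather than shown explicitly in the trace.

#!/usr/bin/env bash
# Sketch only: assumes an SPDK tree built with its LLVM fuzzer apps.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}    # assumption; the log uses the Jenkins workspace
OUT_DIR=${OUT_DIR:-/tmp/llvm_out}   # assumption; stands in for .../output/llvm/
FUZZER=11
PORT="44$(printf '%02d' "$FUZZER")"               # run.sh@34: 44 + zero-padded number -> 4411
CORPUS="$SPDK_DIR/../corpus/llvm_nvmf_$FUZZER"    # run.sh@26/@35
mkdir -p "$CORPUS" "$OUT_DIR"

# run.sh@38: point the NVMe-oF listener at the per-fuzzer port.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$FUZZER.conf"

# run.sh@41-42: suppress two known-benign leaks for LeakSanitizer
# (file path inferred from the LSAN_OPTIONS string at run.sh@32).
{
  echo 'leak:spdk_nvmf_qpair_disconnect'
  echo 'leak:nvmf_ctrlr_create'
} > /var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS='report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0'

# run.sh@45: same flags as the traced invocation -- one core (-m 0x1),
# 512 MB of hugepage memory (-s 512), time budget -t 1 as traced,
# fuzzer group 11 (-Z 11).
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$OUT_DIR" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
    -c "/tmp/fuzz_json_$FUZZER.conf" -t 1 -D "$CORPUS" -Z 11

The libFuzzer banner that follows in the log (seed, inline 8-bit counters, PC tables, and the empty-corpus warning) is the expected startup output for a first run against an empty corpus directory.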
00:06:53.185 [2024-07-15 13:52:31.011577] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845645 ] 00:06:53.185 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.185 [2024-07-15 13:52:31.226291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.444 [2024-07-15 13:52:31.299150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.444 [2024-07-15 13:52:31.359561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.444 [2024-07-15 13:52:31.375842] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:53.444 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.444 INFO: Seed: 395072385 00:06:53.444 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:53.444 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:53.444 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:53.444 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.444 #2 INITED exec/s: 0 rss: 65Mb 00:06:53.444 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:53.444 This may also happen if the target rejected all inputs we tried so far 00:06:53.444 [2024-07-15 13:52:31.441503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.444 [2024-07-15 13:52:31.441532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.444 [2024-07-15 13:52:31.441591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.444 [2024-07-15 13:52:31.441605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.444 [2024-07-15 13:52:31.441662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.444 [2024-07-15 13:52:31.441677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.703 NEW_FUNC[1/695]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:53.703 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:53.703 #23 NEW cov: 11888 ft: 11897 corp: 2/32b lim: 40 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:53.962 [2024-07-15 13:52:31.782762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.782823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.782924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:191919d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.782950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.783034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.783059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.783146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.783171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.962 NEW_FUNC[1/1]: 0x12f1d50 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:06:53.962 #24 NEW cov: 12027 ft: 12832 corp: 3/68b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:53.962 [2024-07-15 13:52:31.842449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.842479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.842555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.842570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.842629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.842642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.962 #35 NEW cov: 12033 ft: 13118 corp: 4/99b lim: 40 exec/s: 0 rss: 72Mb L: 31/36 MS: 1 ShuffleBytes- 00:06:53.962 [2024-07-15 13:52:31.882192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.882223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.962 #37 NEW cov: 12118 ft: 14126 corp: 5/110b lim: 40 exec/s: 0 rss: 72Mb L: 11/36 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:53.962 [2024-07-15 13:52:31.922747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.922773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.922834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:191919d9 cdw11:d9d9d9d9 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.922847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.922903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:90d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.922917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.922974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.922987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.962 #38 NEW cov: 12118 ft: 14228 corp: 6/147b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertByte- 00:06:53.962 [2024-07-15 13:52:31.972785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.972810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.972867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.972883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.962 [2024-07-15 13:52:31.972943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:31.972956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.962 #39 NEW cov: 12118 ft: 14363 corp: 7/178b lim: 40 exec/s: 0 rss: 72Mb L: 31/37 MS: 1 ShuffleBytes- 00:06:53.962 [2024-07-15 13:52:32.012570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-07-15 13:52:32.012595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 #40 NEW cov: 12118 ft: 14426 corp: 8/191b lim: 40 exec/s: 0 rss: 73Mb L: 13/37 MS: 1 CopyPart- 00:06:54.222 [2024-07-15 13:52:32.063181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.063207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.063286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.063300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.063357] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9090909 cdw11:090909d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.063371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.063428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.063442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.222 #41 NEW cov: 12118 ft: 14441 corp: 9/228b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:54.222 [2024-07-15 13:52:32.103097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.103122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.103180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.103194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.103254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.103268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.222 #42 NEW cov: 12118 ft: 14472 corp: 10/259b lim: 40 exec/s: 0 rss: 73Mb L: 31/37 MS: 1 CopyPart- 00:06:54.222 [2024-07-15 13:52:32.143210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.143239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.143326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.143341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.143398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.143411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.222 #43 NEW cov: 12118 ft: 14518 corp: 11/290b lim: 40 exec/s: 0 rss: 73Mb L: 31/37 MS: 1 ShuffleBytes- 00:06:54.222 [2024-07-15 13:52:32.193552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ad9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.193577] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.193640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.193653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.193712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.193725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.193783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d90a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.193796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.222 #44 NEW cov: 12118 ft: 14617 corp: 12/322b lim: 40 exec/s: 0 rss: 73Mb L: 32/37 MS: 1 CrossOver- 00:06:54.222 [2024-07-15 13:52:32.233647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.233672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.233748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.233763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.233821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d90909 cdw11:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.233834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.233892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.233906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.222 #45 NEW cov: 12118 ft: 14676 corp: 13/360b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 CrossOver- 00:06:54.222 [2024-07-15 13:52:32.283763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.283789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.283854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.283868] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.283924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d90909 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.283938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.222 [2024-07-15 13:52:32.283998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.222 [2024-07-15 13:52:32.284011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.481 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:54.481 #51 NEW cov: 12141 ft: 14719 corp: 14/398b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 CrossOver- 00:06:54.481 [2024-07-15 13:52:32.333936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ad9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.333961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.334021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.334035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.334108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff28 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.334122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.334181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d90a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.334194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.481 #52 NEW cov: 12141 ft: 14750 corp: 15/430b lim: 40 exec/s: 0 rss: 73Mb L: 32/38 MS: 1 CMP- DE: "\377\377\377\377\377\377\377("- 00:06:54.481 [2024-07-15 13:52:32.383567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff28d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.383592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.481 #53 NEW cov: 12141 ft: 14829 corp: 16/443b lim: 40 exec/s: 0 rss: 73Mb L: 13/38 MS: 1 CrossOver- 00:06:54.481 [2024-07-15 13:52:32.434185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9ff cdw11:ff0ad9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.434210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.434292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.434307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.434364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.434381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.434443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.434457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.481 #54 NEW cov: 12141 ft: 14878 corp: 17/477b lim: 40 exec/s: 54 rss: 73Mb L: 34/38 MS: 1 CrossOver- 00:06:54.481 [2024-07-15 13:52:32.484325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.484350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.481 [2024-07-15 13:52:32.484428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.481 [2024-07-15 13:52:32.484442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.482 [2024-07-15 13:52:32.484499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff28d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.482 [2024-07-15 13:52:32.484513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.482 [2024-07-15 13:52:32.484571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.482 [2024-07-15 13:52:32.484584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.482 #55 NEW cov: 12141 ft: 14891 corp: 18/516b lim: 40 exec/s: 55 rss: 73Mb L: 39/39 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377("- 00:06:54.482 [2024-07-15 13:52:32.524441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.482 [2024-07-15 13:52:32.524468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.482 [2024-07-15 13:52:32.524544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:191919d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.482 [2024-07-15 13:52:32.524559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.482 [2024-07-15 13:52:32.524617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9ffd9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.482 [2024-07-15 13:52:32.524631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.482 [2024-07-15 13:52:32.524691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.482 [2024-07-15 13:52:32.524704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.482 #56 NEW cov: 12141 ft: 14911 corp: 19/553b lim: 40 exec/s: 56 rss: 73Mb L: 37/39 MS: 1 CrossOver- 00:06:54.740 [2024-07-15 13:52:32.564589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.564615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.740 [2024-07-15 13:52:32.564680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:09d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.564698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.740 [2024-07-15 13:52:32.564755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff28d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.564769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.740 [2024-07-15 13:52:32.564830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.564844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.740 #57 NEW cov: 12141 ft: 14932 corp: 20/592b lim: 40 exec/s: 57 rss: 73Mb L: 39/39 MS: 1 CrossOver- 00:06:54.740 [2024-07-15 13:52:32.614275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff41ffff cdw11:ffffff28 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.614302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.740 #58 NEW cov: 12141 ft: 14949 corp: 21/606b lim: 40 exec/s: 58 rss: 73Mb L: 14/39 MS: 1 InsertByte- 00:06:54.740 [2024-07-15 13:52:32.664846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.664872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.740 [2024-07-15 13:52:32.664949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d919d919 cdw11:d919d9d9 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:54.740 [2024-07-15 13:52:32.664963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.740 [2024-07-15 13:52:32.665021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:90d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.665035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.665095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.665108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.741 #59 NEW cov: 12141 ft: 14959 corp: 22/643b lim: 40 exec/s: 59 rss: 73Mb L: 37/39 MS: 1 ShuffleBytes- 00:06:54.741 [2024-07-15 13:52:32.714499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:2dff28d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.714524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.741 #60 NEW cov: 12141 ft: 15054 corp: 23/656b lim: 40 exec/s: 60 rss: 73Mb L: 13/39 MS: 1 ChangeByte- 00:06:54.741 [2024-07-15 13:52:32.755245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.755271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.755332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9191919 cdw11:1919d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.755346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.755409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d990 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.755423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.755481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.755495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.755556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d90a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.755570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.741 #61 NEW cov: 12141 ft: 15127 corp: 24/696b lim: 40 exec/s: 61 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:06:54.741 [2024-07-15 13:52:32.795190] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffd9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.795222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.795281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.795294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.795354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:09d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.795367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.741 [2024-07-15 13:52:32.795425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.741 [2024-07-15 13:52:32.795438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.001 #62 NEW cov: 12141 ft: 15129 corp: 25/733b lim: 40 exec/s: 62 rss: 73Mb L: 37/40 MS: 1 CrossOver- 00:06:55.001 [2024-07-15 13:52:32.835337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.835363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.835422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.835436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.835495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d90909 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.835509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.835565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d940 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.835578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.001 #63 NEW cov: 12141 ft: 15176 corp: 26/772b lim: 40 exec/s: 63 rss: 74Mb L: 39/40 MS: 1 InsertByte- 00:06:55.001 [2024-07-15 13:52:32.885472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:59d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.885498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.885571] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d919d919 cdw11:d919d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.885585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.885646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:90d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.885659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.885714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.885728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.001 #64 NEW cov: 12141 ft: 15188 corp: 27/809b lim: 40 exec/s: 64 rss: 74Mb L: 37/40 MS: 1 ChangeBit- 00:06:55.001 [2024-07-15 13:52:32.935607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.935633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.935691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.935706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.935764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.935778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.935836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.935849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.001 #65 NEW cov: 12141 ft: 15253 corp: 28/848b lim: 40 exec/s: 65 rss: 74Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:06:55.001 [2024-07-15 13:52:32.975721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d927 cdw11:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.975747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.975807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:09d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.975821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 
13:52:32.975879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff28d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.975893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:32.975951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:32.975968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.001 #66 NEW cov: 12141 ft: 15345 corp: 29/887b lim: 40 exec/s: 66 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:06:55.001 [2024-07-15 13:52:33.025835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:33.025860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:33.025935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d919d919 cdw11:d919d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:33.025949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:33.026007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:90d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:33.026020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.001 [2024-07-15 13:52:33.026082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d925d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:33.026096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.001 #67 NEW cov: 12141 ft: 15354 corp: 30/924b lim: 40 exec/s: 67 rss: 74Mb L: 37/40 MS: 1 ChangeBinInt- 00:06:55.001 [2024-07-15 13:52:33.065551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.001 [2024-07-15 13:52:33.065576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 #68 NEW cov: 12141 ft: 15367 corp: 31/937b lim: 40 exec/s: 68 rss: 74Mb L: 13/40 MS: 1 CopyPart- 00:06:55.261 [2024-07-15 13:52:33.106028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d8d9ff cdw11:ff0ad9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.106054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.106110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.106123] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.106180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.106194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.106256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.106269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.261 #69 NEW cov: 12141 ft: 15372 corp: 32/971b lim: 40 exec/s: 69 rss: 74Mb L: 34/40 MS: 1 ChangeBit- 00:06:55.261 [2024-07-15 13:52:33.155907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.155932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.155996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.156010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.261 #70 NEW cov: 12141 ft: 15592 corp: 33/987b lim: 40 exec/s: 70 rss: 74Mb L: 16/40 MS: 1 CopyPart- 00:06:55.261 [2024-07-15 13:52:33.206343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.206368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.206446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:191919d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.206461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.206519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:90d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.206533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.206590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.206604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.261 #71 NEW cov: 12141 ft: 15607 corp: 34/1024b lim: 40 exec/s: 71 rss: 74Mb L: 37/40 MS: 1 CopyPart- 00:06:55.261 [2024-07-15 13:52:33.246314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 
cdw10:d9d9d1d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.246338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.246394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.246408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.246467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.246480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.261 #72 NEW cov: 12141 ft: 15616 corp: 35/1055b lim: 40 exec/s: 72 rss: 74Mb L: 31/40 MS: 1 ChangeBit- 00:06:55.261 [2024-07-15 13:52:33.286611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d91919 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.286636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.286713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:191919d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.286727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.286785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:90d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.286799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.286858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.286872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.261 #73 NEW cov: 12141 ft: 15618 corp: 36/1092b lim: 40 exec/s: 73 rss: 74Mb L: 37/40 MS: 1 ChangeByte- 00:06:55.261 [2024-07-15 13:52:33.326721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.326746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.326822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:09d9d9d9 cdw11:d9d9d9ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.326836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.326894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:6 nsid:0 cdw10:ffffffff cdw11:0100d726 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.326908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.261 [2024-07-15 13:52:33.326964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:26262629 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.261 [2024-07-15 13:52:33.326977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.521 #74 NEW cov: 12141 ft: 15627 corp: 37/1131b lim: 40 exec/s: 74 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:06:55.521 [2024-07-15 13:52:33.366459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d1d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.521 [2024-07-15 13:52:33.366485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.521 [2024-07-15 13:52:33.366561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d90a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.521 [2024-07-15 13:52:33.366575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.521 #75 NEW cov: 12141 ft: 15631 corp: 38/1147b lim: 40 exec/s: 75 rss: 74Mb L: 16/40 MS: 1 EraseBytes- 00:06:55.521 [2024-07-15 13:52:33.416454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9ff41ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.521 [2024-07-15 13:52:33.416479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.521 #76 NEW cov: 12141 ft: 15639 corp: 39/1157b lim: 40 exec/s: 38 rss: 74Mb L: 10/40 MS: 1 CrossOver- 00:06:55.521 #76 DONE cov: 12141 ft: 15639 corp: 39/1157b lim: 40 exec/s: 38 rss: 74Mb 00:06:55.521 ###### Recommended dictionary. ###### 00:06:55.521 "\377\377\377\377\377\377\377(" # Uses: 1 00:06:55.521 ###### End of recommended dictionary. 
###### 00:06:55.521 Done 76 runs in 2 second(s) 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.521 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.780 13:52:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:55.780 [2024-07-15 13:52:33.633848] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:55.780 [2024-07-15 13:52:33.633953] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846016 ] 00:06:55.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.780 [2024-07-15 13:52:33.842489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.039 [2024-07-15 13:52:33.913686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.040 [2024-07-15 13:52:33.973186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.040 [2024-07-15 13:52:33.989510] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:56.040 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.040 INFO: Seed: 3009063444 00:06:56.040 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:56.040 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:56.040 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:56.040 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.040 #2 INITED exec/s: 0 rss: 65Mb 00:06:56.040 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.040 This may also happen if the target rejected all inputs we tried so far 00:06:56.040 [2024-07-15 13:52:34.054825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.040 [2024-07-15 13:52:34.054856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.606 NEW_FUNC[1/696]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:56.606 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.606 #9 NEW cov: 11895 ft: 11896 corp: 2/11b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 2 CrossOver-CMP- DE: "\001\000\000\000\000\000\000?"- 00:06:56.606 [2024-07-15 13:52:34.395905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.606 [2024-07-15 13:52:34.395982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.606 #10 NEW cov: 12025 ft: 12728 corp: 3/22b lim: 40 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 CrossOver- 00:06:56.606 [2024-07-15 13:52:34.446129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.446156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.607 [2024-07-15 13:52:34.446232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.446246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.607 [2024-07-15 13:52:34.446303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.446317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.607 [2024-07-15 13:52:34.446372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.446385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.607 #11 NEW cov: 12031 ft: 13802 corp: 4/57b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:56.607 [2024-07-15 13:52:34.495956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.495983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.607 [2024-07-15 13:52:34.496040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.496054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.607 #12 NEW cov: 12116 ft: 14292 corp: 5/75b lim: 40 exec/s: 0 rss: 72Mb L: 18/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:56.607 [2024-07-15 13:52:34.535908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.535933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.607 #13 NEW cov: 12116 ft: 14356 corp: 6/87b lim: 40 exec/s: 0 rss: 72Mb L: 12/35 MS: 1 CrossOver- 00:06:56.607 [2024-07-15 13:52:34.586046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:ae000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.586071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.607 #14 NEW cov: 12116 ft: 14417 corp: 7/98b lim: 40 exec/s: 0 rss: 72Mb L: 11/35 MS: 1 InsertByte- 00:06:56.607 [2024-07-15 13:52:34.626120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.626145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.607 #15 NEW cov: 12116 ft: 14510 corp: 8/111b lim: 40 exec/s: 0 rss: 72Mb L: 13/35 MS: 1 InsertByte- 00:06:56.607 [2024-07-15 13:52:34.676285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.607 [2024-07-15 13:52:34.676313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.865 #16 NEW cov: 12116 ft: 14572 corp: 9/122b lim: 40 exec/s: 0 rss: 72Mb L: 11/35 MS: 1 ChangeBinInt- 00:06:56.865 [2024-07-15 13:52:34.726383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.865 [2024-07-15 13:52:34.726408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.865 #17 NEW cov: 12116 ft: 14643 corp: 10/134b lim: 40 exec/s: 0 rss: 72Mb L: 12/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:56.865 [2024-07-15 13:52:34.766996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.865 [2024-07-15 13:52:34.767021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.865 [2024-07-15 13:52:34.767079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.865 [2024-07-15 13:52:34.767093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.865 [2024-07-15 13:52:34.767149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.767162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.767221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:2020202a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.767250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.866 #18 NEW cov: 12116 ft: 14675 corp: 11/170b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 InsertByte- 00:06:56.866 [2024-07-15 13:52:34.807084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00002020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.807109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.807164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.807178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.807256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.807271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.807326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 
nsid:0 cdw10:20202020 cdw11:20202a20 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.807339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.866 #19 NEW cov: 12116 ft: 14761 corp: 12/205b lim: 40 exec/s: 0 rss: 73Mb L: 35/36 MS: 1 CrossOver- 00:06:56.866 [2024-07-15 13:52:34.846710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:003a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.846735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.866 #20 NEW cov: 12116 ft: 14801 corp: 13/218b lim: 40 exec/s: 0 rss: 73Mb L: 13/36 MS: 1 InsertByte- 00:06:56.866 [2024-07-15 13:52:34.887292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.887317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.887374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.887387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.887444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.887457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.866 [2024-07-15 13:52:34.887515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.887528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.866 #21 NEW cov: 12116 ft: 14840 corp: 14/253b lim: 40 exec/s: 0 rss: 73Mb L: 35/36 MS: 1 ShuffleBytes- 00:06:56.866 [2024-07-15 13:52:34.926928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.866 [2024-07-15 13:52:34.926952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.124 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:57.124 #22 NEW cov: 12139 ft: 14963 corp: 15/264b lim: 40 exec/s: 0 rss: 73Mb L: 11/36 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:57.124 [2024-07-15 13:52:34.967048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.124 [2024-07-15 13:52:34.967075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.124 #23 NEW cov: 12139 ft: 14980 corp: 16/276b lim: 40 exec/s: 0 rss: 73Mb L: 12/36 MS: 1 
InsertByte- 00:06:57.124 [2024-07-15 13:52:35.017204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.124 [2024-07-15 13:52:35.017236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.124 #24 NEW cov: 12139 ft: 14991 corp: 17/289b lim: 40 exec/s: 24 rss: 73Mb L: 13/36 MS: 1 CopyPart- 00:06:57.124 [2024-07-15 13:52:35.067312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.124 [2024-07-15 13:52:35.067337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.124 #25 NEW cov: 12139 ft: 15021 corp: 18/300b lim: 40 exec/s: 25 rss: 73Mb L: 11/36 MS: 1 ShuffleBytes- 00:06:57.124 [2024-07-15 13:52:35.107432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.124 [2024-07-15 13:52:35.107457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.124 #26 NEW cov: 12139 ft: 15042 corp: 19/314b lim: 40 exec/s: 26 rss: 73Mb L: 14/36 MS: 1 CrossOver- 00:06:57.124 [2024-07-15 13:52:35.157770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.124 [2024-07-15 13:52:35.157799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.124 [2024-07-15 13:52:35.157859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.124 [2024-07-15 13:52:35.157873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.124 #27 NEW cov: 12139 ft: 15048 corp: 20/333b lim: 40 exec/s: 27 rss: 73Mb L: 19/36 MS: 1 InsertByte- 00:06:57.382 [2024-07-15 13:52:35.207733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.207759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.382 #28 NEW cov: 12139 ft: 15106 corp: 21/344b lim: 40 exec/s: 28 rss: 73Mb L: 11/36 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:57.382 [2024-07-15 13:52:35.248010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.248035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.382 [2024-07-15 13:52:35.248092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff0a0a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.248106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.382 #29 NEW cov: 12139 ft: 15123 corp: 22/363b lim: 40 exec/s: 29 rss: 73Mb L: 19/36 MS: 1 InsertRepeatedBytes- 00:06:57.382 [2024-07-15 13:52:35.297986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.298012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.382 #30 NEW cov: 12139 ft: 15211 corp: 23/374b lim: 40 exec/s: 30 rss: 73Mb L: 11/36 MS: 1 CopyPart- 00:06:57.382 [2024-07-15 13:52:35.348575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.348602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.382 [2024-07-15 13:52:35.348660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.348674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.382 [2024-07-15 13:52:35.348732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.348746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.382 [2024-07-15 13:52:35.348802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.348814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.382 #31 NEW cov: 12139 ft: 15221 corp: 24/409b lim: 40 exec/s: 31 rss: 73Mb L: 35/36 MS: 1 ChangeByte- 00:06:57.382 [2024-07-15 13:52:35.398404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:0a0a0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.398431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.382 [2024-07-15 13:52:35.398491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:c40a2001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.398505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.382 #32 NEW cov: 12139 ft: 15233 corp: 25/426b lim: 40 exec/s: 32 rss: 73Mb L: 17/36 MS: 1 InsertRepeatedBytes- 00:06:57.382 [2024-07-15 13:52:35.448389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.382 [2024-07-15 13:52:35.448414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 #33 NEW cov: 12139 ft: 15255 corp: 26/438b lim: 40 exec/s: 33 rss: 
73Mb L: 12/36 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:57.640 [2024-07-15 13:52:35.488635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.488661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.488719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.488732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.640 #34 NEW cov: 12139 ft: 15275 corp: 27/456b lim: 40 exec/s: 34 rss: 73Mb L: 18/36 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:57.640 [2024-07-15 13:52:35.529064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.529090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.529148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.529161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.529222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.529235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.529294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:003f0a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.529308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.640 #35 NEW cov: 12139 ft: 15323 corp: 28/492b lim: 40 exec/s: 35 rss: 73Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:57.640 [2024-07-15 13:52:35.578752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.578778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 #36 NEW cov: 12139 ft: 15334 corp: 29/500b lim: 40 exec/s: 36 rss: 73Mb L: 8/36 MS: 1 EraseBytes- 00:06:57.640 [2024-07-15 13:52:35.618775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:2001009e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.618801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 #38 NEW cov: 12139 ft: 15354 corp: 30/508b lim: 40 exec/s: 38 rss: 73Mb L: 8/36 MS: 2 EraseBytes-InsertByte- 
00:06:57.640 [2024-07-15 13:52:35.658926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.658952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 #39 NEW cov: 12139 ft: 15363 corp: 31/519b lim: 40 exec/s: 39 rss: 73Mb L: 11/36 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:57.640 [2024-07-15 13:52:35.699444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.699470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.699528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.699541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.699597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:201a2020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.699611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.640 [2024-07-15 13:52:35.699668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.640 [2024-07-15 13:52:35.699681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.898 #40 NEW cov: 12139 ft: 15376 corp: 32/554b lim: 40 exec/s: 40 rss: 73Mb L: 35/36 MS: 1 ChangeBinInt- 00:06:57.898 [2024-07-15 13:52:35.739072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.739097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.898 [2024-07-15 13:52:35.769195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:230a0100 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.769224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.898 #42 NEW cov: 12139 ft: 15384 corp: 33/562b lim: 40 exec/s: 42 rss: 73Mb L: 8/36 MS: 2 EraseBytes-ChangeByte- 00:06:57.898 [2024-07-15 13:52:35.809284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01004800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.809309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.898 #43 NEW cov: 12139 ft: 15398 corp: 34/573b lim: 40 exec/s: 43 rss: 73Mb L: 11/36 MS: 1 CMP- DE: "H\000\000\000\000\000\000\000"- 00:06:57.898 [2024-07-15 13:52:35.849387] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000003f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.849412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.898 #44 NEW cov: 12139 ft: 15416 corp: 35/588b lim: 40 exec/s: 44 rss: 74Mb L: 15/36 MS: 1 InsertByte- 00:06:57.898 [2024-07-15 13:52:35.900027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.900055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.898 [2024-07-15 13:52:35.900114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.900128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.898 [2024-07-15 13:52:35.900187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:201a2020 cdw11:30202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.900200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.898 [2024-07-15 13:52:35.900260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20202020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.900273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.898 #45 NEW cov: 12139 ft: 15426 corp: 36/623b lim: 40 exec/s: 45 rss: 74Mb L: 35/36 MS: 1 ChangeBit- 00:06:57.898 [2024-07-15 13:52:35.949673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:0a01000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.898 [2024-07-15 13:52:35.949698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.157 #46 NEW cov: 12139 ft: 15431 corp: 37/634b lim: 40 exec/s: 46 rss: 74Mb L: 11/36 MS: 1 CopyPart- 00:06:58.157 [2024-07-15 13:52:35.999795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0104 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.157 [2024-07-15 13:52:35.999820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.157 #47 NEW cov: 12139 ft: 15442 corp: 38/647b lim: 40 exec/s: 47 rss: 74Mb L: 13/36 MS: 1 CMP- DE: "\001\004\000\000\000\000\000\000"- 00:06:58.157 [2024-07-15 13:52:36.039952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01004800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.157 [2024-07-15 13:52:36.039976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.157 #48 NEW cov: 12139 ft: 15469 corp: 39/658b lim: 40 exec/s: 24 rss: 74Mb L: 11/36 MS: 1 ShuffleBytes- 00:06:58.157 #48 DONE cov: 
12139 ft: 15469 corp: 39/658b lim: 40 exec/s: 24 rss: 74Mb 00:06:58.157 ###### Recommended dictionary. ###### 00:06:58.157 "\001\000\000\000\000\000\000?" # Uses: 7 00:06:58.157 "H\000\000\000\000\000\000\000" # Uses: 0 00:06:58.157 "\001\004\000\000\000\000\000\000" # Uses: 0 00:06:58.157 ###### End of recommended dictionary. ###### 00:06:58.157 Done 48 runs in 2 second(s) 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.157 13:52:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:58.415 [2024-07-15 13:52:36.252102] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
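The "Recommended dictionary" block that closes run 12 is libFuzzer's summary of the byte sequences that kept producing new coverage, with a Uses count per entry. The escapes are C-style octal, so the entries can be embedded verbatim in C; a small throwaway sketch (hand-assembled from the log above, not produced by any SPDK tooling) to dump them as raw hex:

#include <stdio.h>

/* Entries copied verbatim from run 12's recommended dictionary.
 * Each is 8 bytes; the octal escapes printed by libFuzzer are valid
 * C string escapes. */
static const char *dict[] = {
    "\001\000\000\000\000\000\000?",     /* Uses: 7 */
    "H\000\000\000\000\000\000\000",     /* Uses: 0 */
    "\001\004\000\000\000\000\000\000",  /* Uses: 0 */
};

int main(void)
{
    for (size_t i = 0; i < sizeof(dict) / sizeof(dict[0]); i++) {
        for (size_t j = 0; j < 8; j++) {
            printf("%02x ", (unsigned char)dict[i][j]);
        }
        printf("\n");
    }
    return 0;
}

Each entry is exactly 8 bytes, the width of the cdw10/cdw11 pair the target consumes, which is presumably why the PersAutoDict mutator kept splicing them back into inputs during the run.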
00:06:58.415 [2024-07-15 13:52:36.252179] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846390 ] 00:06:58.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.415 [2024-07-15 13:52:36.446332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.673 [2024-07-15 13:52:36.515949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.673 [2024-07-15 13:52:36.575325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.673 [2024-07-15 13:52:36.591626] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:58.673 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.673 INFO: Seed: 1315113363 00:06:58.673 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:06:58.673 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:06:58.673 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:58.673 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.673 #2 INITED exec/s: 0 rss: 65Mb 00:06:58.673 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.673 This may also happen if the target rejected all inputs we tried so far 00:06:58.673 [2024-07-15 13:52:36.656860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a3c3c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.673 [2024-07-15 13:52:36.656890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.931 NEW_FUNC[1/695]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:58.931 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:58.931 #20 NEW cov: 11881 ft: 11879 corp: 2/11b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:06:58.931 [2024-07-15 13:52:36.998345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.931 [2024-07-15 13:52:36.998407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.931 [2024-07-15 13:52:36.998493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.931 [2024-07-15 13:52:36.998520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.931 [2024-07-15 13:52:36.998606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.931 [2024-07-15 13:52:36.998631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.931 [2024-07-15 13:52:36.998711] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.931 [2024-07-15 13:52:36.998736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.190 #22 NEW cov: 12013 ft: 13194 corp: 3/46b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:59.190 [2024-07-15 13:52:37.048092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.048120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.048176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.048189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.048247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.048260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.048315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.048328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.190 #23 NEW cov: 12019 ft: 13367 corp: 4/82b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 CrossOver- 00:06:59.190 [2024-07-15 13:52:37.097987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.098015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.098074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.098088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.190 #24 NEW cov: 12104 ft: 13812 corp: 5/99b lim: 40 exec/s: 0 rss: 72Mb L: 17/36 MS: 1 CrossOver- 00:06:59.190 [2024-07-15 13:52:37.138445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:69959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.138471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.138529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 
13:52:37.138543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.138599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.138616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.138670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.138684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.190 #25 NEW cov: 12104 ft: 13954 corp: 6/135b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 ChangeByte- 00:06:59.190 [2024-07-15 13:52:37.188299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.188323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.188379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95950a95 cdw11:95959530 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.188393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.190 #26 NEW cov: 12104 ft: 14001 corp: 7/152b lim: 40 exec/s: 0 rss: 72Mb L: 17/36 MS: 1 ChangeByte- 00:06:59.190 [2024-07-15 13:52:37.238640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.238666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.238739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.238753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.238807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.238821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.190 [2024-07-15 13:52:37.238875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.190 [2024-07-15 13:52:37.238888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.448 #27 NEW cov: 12104 ft: 14130 corp: 8/188b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 ShuffleBytes- 00:06:59.448 [2024-07-15 13:52:37.278716] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.448 [2024-07-15 13:52:37.278741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.448 [2024-07-15 13:52:37.278797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.448 [2024-07-15 13:52:37.278811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.448 [2024-07-15 13:52:37.278884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.278898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.278951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.278968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.449 #28 NEW cov: 12104 ft: 14155 corp: 9/224b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 ShuffleBytes- 00:06:59.449 [2024-07-15 13:52:37.328845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.328870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.328943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.328957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.329011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:950a9595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.329025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.329079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.329093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.449 #29 NEW cov: 12104 ft: 14184 corp: 10/260b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 CopyPart- 00:06:59.449 [2024-07-15 13:52:37.378989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.379014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:59.449 [2024-07-15 13:52:37.379071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:d5959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.379085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.379140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.379153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.379208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.379226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.449 #30 NEW cov: 12104 ft: 14276 corp: 11/296b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 ChangeBit- 00:06:59.449 [2024-07-15 13:52:37.419097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.419122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.419195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:88389916 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.419209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.419269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:950a9595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.419286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.419343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.419357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.449 #31 NEW cov: 12104 ft: 14331 corp: 12/332b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 CMP- DE: "\2108\231\026\000\000\000\000"- 00:06:59.449 [2024-07-15 13:52:37.469229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.469254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.469308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.469321] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.469376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:9595950a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.469389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.469443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.469455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.449 #32 NEW cov: 12104 ft: 14368 corp: 13/369b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertByte- 00:06:59.449 [2024-07-15 13:52:37.509094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a3c3c cdw11:3c959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.509118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.449 [2024-07-15 13:52:37.509178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:9595953c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.449 [2024-07-15 13:52:37.509192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.707 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:59.707 #33 NEW cov: 12127 ft: 14449 corp: 14/388b lim: 40 exec/s: 0 rss: 72Mb L: 19/37 MS: 1 CrossOver- 00:06:59.707 [2024-07-15 13:52:37.569509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:d5959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.569534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.569592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.569605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.569675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.569693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.569746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.569760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.707 #34 NEW cov: 12127 ft: 14471 corp: 15/424b lim: 40 exec/s: 0 rss: 
72Mb L: 36/37 MS: 1 CrossOver- 00:06:59.707 [2024-07-15 13:52:37.609386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.609411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.609485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.609499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.707 #35 NEW cov: 12127 ft: 14487 corp: 16/446b lim: 40 exec/s: 0 rss: 72Mb L: 22/37 MS: 1 InsertRepeatedBytes- 00:06:59.707 [2024-07-15 13:52:37.649360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a2a0a3c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.649384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.707 #36 NEW cov: 12127 ft: 14509 corp: 17/457b lim: 40 exec/s: 36 rss: 72Mb L: 11/37 MS: 1 InsertByte- 00:06:59.707 [2024-07-15 13:52:37.689587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.689613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.689669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.689683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.707 #37 NEW cov: 12127 ft: 14528 corp: 18/479b lim: 40 exec/s: 37 rss: 73Mb L: 22/37 MS: 1 ChangeBit- 00:06:59.707 [2024-07-15 13:52:37.739956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959524 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.739980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.740054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.740068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.740123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.740136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.707 [2024-07-15 13:52:37.740193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 
cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.707 [2024-07-15 13:52:37.740206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.707 #38 NEW cov: 12127 ft: 14580 corp: 19/515b lim: 40 exec/s: 38 rss: 73Mb L: 36/37 MS: 1 ChangeBinInt- 00:06:59.978 [2024-07-15 13:52:37.779828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95958838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.779853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.779909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:99160000 cdw11:00009595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.779923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.978 #39 NEW cov: 12127 ft: 14631 corp: 20/537b lim: 40 exec/s: 39 rss: 73Mb L: 22/37 MS: 1 PersAutoDict- DE: "\2108\231\026\000\000\000\000"- 00:06:59.978 [2024-07-15 13:52:37.820169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:69959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.820192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.820267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.820281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.820338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.820352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.820409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.820422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.978 #40 NEW cov: 12127 ft: 14635 corp: 21/573b lim: 40 exec/s: 40 rss: 73Mb L: 36/37 MS: 1 ShuffleBytes- 00:06:59.978 [2024-07-15 13:52:37.870320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.870344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.870414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:d5959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.870429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.870487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.870501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.870557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:950a9595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.870570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.978 #41 NEW cov: 12127 ft: 14646 corp: 22/609b lim: 40 exec/s: 41 rss: 73Mb L: 36/37 MS: 1 CrossOver- 00:06:59.978 [2024-07-15 13:52:37.920450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.920477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.920552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95958838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.920566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.920621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:99160000 cdw11:00009595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.920635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.920690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.920704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.978 #42 NEW cov: 12127 ft: 14657 corp: 23/645b lim: 40 exec/s: 42 rss: 73Mb L: 36/37 MS: 1 PersAutoDict- DE: "\2108\231\026\000\000\000\000"- 00:06:59.978 [2024-07-15 13:52:37.960598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959524 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.960623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.960697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.960711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.960766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95ffffff cdw11:ff950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
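Every attempt is logged as a pair of notices from nvme_qpair.c: the line-225 helper prints the submitted admin command and the line-477 helper prints its completion, here always INVALID OPCODE (00/01) because opcode 1ah (DIRECTIVE RECEIVE) is rejected by the target. A loose imitation of that pairing, with hypothetical local structs standing in for spdk_nvme_cmd/spdk_nvme_cpl (field names follow the log output, not necessarily the SPDK headers):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the fields visible in the notices. */
struct cmd { uint16_t cid; uint32_t nsid, cdw10, cdw11; };
struct cpl { uint16_t cid, sqhd; uint32_t cdw0; };

static void print_pair(const struct cmd *c, const struct cpl *p)
{
    /* Same shape as the line-225 command notice. */
    printf("DIRECTIVE RECEIVE (1a) qid:0 cid:%u nsid:%u cdw10:%08x "
           "cdw11:%08x SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n",
           (unsigned)c->cid, (unsigned)c->nsid, c->cdw10, c->cdw11);
    /* Same shape as the line-477 completion notice. */
    printf("INVALID OPCODE (00/01) qid:0 cid:%u cdw0:%x sqhd:%04x "
           "p:0 m:0 dnr:0\n",
           (unsigned)p->cid, p->cdw0, (unsigned)p->sqhd);
}

int main(void)
{
    /* Values taken from the cid:6 pair logged around this point. */
    struct cmd c = { .cid = 6, .nsid = 0,
                     .cdw10 = 0x95ffffff, .cdw11 = 0xff950a95 };
    struct cpl p = { .cid = 6, .sqhd = 0x0011, .cdw0 = 0 };
    print_pair(&c, &p);
    return 0;
}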
00:06:59.978 [2024-07-15 13:52:37.960780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.978 [2024-07-15 13:52:37.960837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:37.960850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.978 #43 NEW cov: 12127 ft: 14675 corp: 24/681b lim: 40 exec/s: 43 rss: 73Mb L: 36/37 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:59.978 [2024-07-15 13:52:38.010341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a2a0a3c cdw11:3c3c743c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.978 [2024-07-15 13:52:38.010376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.978 #44 NEW cov: 12127 ft: 14714 corp: 25/692b lim: 40 exec/s: 44 rss: 73Mb L: 11/37 MS: 1 ChangeByte- 00:07:00.238 [2024-07-15 13:52:38.060940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:69959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.060965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.061023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.061037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.061091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:959595ff cdw11:ffffff95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.061108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.061161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.061174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.061251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:95959595 cdw11:9595950a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.061264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.238 #45 NEW cov: 12127 ft: 14755 corp: 26/732b lim: 40 exec/s: 45 rss: 73Mb L: 40/40 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:00.238 [2024-07-15 13:52:38.100818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.100844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:00.238 [2024-07-15 13:52:38.100904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.100917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.100986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0a959595 cdw11:9595950a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.100999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.238 #46 NEW cov: 12127 ft: 14936 corp: 27/762b lim: 40 exec/s: 46 rss: 73Mb L: 30/40 MS: 1 CopyPart- 00:07:00.238 [2024-07-15 13:52:38.140803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.140828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.140885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95958838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.140898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.238 #47 NEW cov: 12127 ft: 14951 corp: 28/780b lim: 40 exec/s: 47 rss: 73Mb L: 18/40 MS: 1 EraseBytes- 00:07:00.238 [2024-07-15 13:52:38.190937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:9595952b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.190963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.191034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:9595950a cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.191048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.238 #48 NEW cov: 12127 ft: 14991 corp: 29/798b lim: 40 exec/s: 48 rss: 73Mb L: 18/40 MS: 1 InsertByte- 00:07:00.238 [2024-07-15 13:52:38.231342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:d5959495 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.231370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.231424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.231438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.231493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.231506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.231562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.231575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.238 #49 NEW cov: 12127 ft: 15036 corp: 30/834b lim: 40 exec/s: 49 rss: 73Mb L: 36/40 MS: 1 ChangeBit- 00:07:00.238 [2024-07-15 13:52:38.281598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:69959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.281623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.281696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.281710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.281764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:959595ff cdw11:ffffff95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.281778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.281835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.281848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.238 [2024-07-15 13:52:38.281905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:95959595 cdw11:9595950a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.238 [2024-07-15 13:52:38.281918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.498 #50 NEW cov: 12127 ft: 15037 corp: 31/874b lim: 40 exec/s: 50 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:07:00.498 [2024-07-15 13:52:38.331359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a3c3c cdw11:3c959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.331386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.331444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:9595df95 cdw11:9595953c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.331458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.498 #51 NEW cov: 12127 ft: 15081 corp: 32/893b lim: 40 exec/s: 51 rss: 73Mb L: 19/40 MS: 1 ChangeByte- 00:07:00.498 [2024-07-15 
13:52:38.381718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.381746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.381803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.381817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.381888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.381902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.381956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:9595959d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.381970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.498 #52 NEW cov: 12127 ft: 15089 corp: 33/929b lim: 40 exec/s: 52 rss: 73Mb L: 36/40 MS: 1 ChangeBit- 00:07:00.498 [2024-07-15 13:52:38.421826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.421851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.421911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:d5959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.421924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.421978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959568 cdw11:95950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.421991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.422051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.422063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.498 #53 NEW cov: 12127 ft: 15102 corp: 34/965b lim: 40 exec/s: 53 rss: 73Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:00.498 [2024-07-15 13:52:38.462084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.462110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.462167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.462180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.498 [2024-07-15 13:52:38.462246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:9595955d cdw11:5d5d5d95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.498 [2024-07-15 13:52:38.462260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.462316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.462332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.462386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:95959595 cdw11:9595950a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.462399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.499 #54 NEW cov: 12127 ft: 15105 corp: 35/1005b lim: 40 exec/s: 54 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:00.499 [2024-07-15 13:52:38.502223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.502250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.502309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.502323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.502379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:9595955d cdw11:5d5d5d95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.502392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.502449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95950a95 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.502462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.502519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:95959595 cdw11:9595950a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.502532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:07:00.499 #55 NEW cov: 12127 ft: 15120 corp: 36/1045b lim: 40 exec/s: 55 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:00.499 [2024-07-15 13:52:38.552195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:95959524 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.552225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.552298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.552313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.552369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95ffffff cdw11:ff950a95 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.552383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.499 [2024-07-15 13:52:38.552440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:95959595 cdw11:88389916 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.499 [2024-07-15 13:52:38.552453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.758 #56 NEW cov: 12127 ft: 15134 corp: 37/1081b lim: 40 exec/s: 56 rss: 73Mb L: 36/40 MS: 1 PersAutoDict- DE: "\2108\231\026\000\000\000\000"- 00:07:00.758 [2024-07-15 13:52:38.602341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:69959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.758 [2024-07-15 13:52:38.602377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.758 [2024-07-15 13:52:38.602452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.758 [2024-07-15 13:52:38.602465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.758 [2024-07-15 13:52:38.602521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.758 [2024-07-15 13:52:38.602534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.758 [2024-07-15 13:52:38.602592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:950a9595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.758 [2024-07-15 13:52:38.602605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.758 #57 NEW cov: 12127 ft: 15140 corp: 38/1120b lim: 40 exec/s: 57 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:07:00.758 [2024-07-15 13:52:38.642177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:95959595 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.758 [2024-07-15 13:52:38.642203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.759 [2024-07-15 13:52:38.642283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95950a95 cdw11:95959530 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.759 [2024-07-15 13:52:38.642298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.759 #58 NEW cov: 12127 ft: 15182 corp: 39/1137b lim: 40 exec/s: 29 rss: 73Mb L: 17/40 MS: 1 ChangeASCIIInt- 00:07:00.759 #58 DONE cov: 12127 ft: 15182 corp: 39/1137b lim: 40 exec/s: 29 rss: 73Mb 00:07:00.759 ###### Recommended dictionary. ###### 00:07:00.759 "\2108\231\026\000\000\000\000" # Uses: 3 00:07:00.759 "\377\377\377\377" # Uses: 1 00:07:00.759 ###### End of recommended dictionary. ###### 00:07:00.759 Done 58 runs in 2 second(s) 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.759 13:52:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:01.018 [2024-07-15 13:52:38.851762] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:01.018 [2024-07-15 13:52:38.851826] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846761 ] 00:07:01.018 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.018 [2024-07-15 13:52:39.050472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.277 [2024-07-15 13:52:39.121389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.277 [2024-07-15 13:52:39.180728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.277 [2024-07-15 13:52:39.197018] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:01.277 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.277 INFO: Seed: 3920118369 00:07:01.277 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:01.277 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:01.277 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:01.277 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.277 #2 INITED exec/s: 0 rss: 65Mb 00:07:01.277 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:01.277 This may also happen if the target rejected all inputs we tried so far 00:07:01.277 [2024-07-15 13:52:39.255177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.277 [2024-07-15 13:52:39.255209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.277 [2024-07-15 13:52:39.255276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.277 [2024-07-15 13:52:39.255291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.277 [2024-07-15 13:52:39.255353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.277 [2024-07-15 13:52:39.255368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.277 [2024-07-15 13:52:39.255428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.277 [2024-07-15 13:52:39.255443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.537 NEW_FUNC[1/696]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:01.537 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.537 #3 NEW cov: 11856 ft: 11855 corp: 2/33b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 
00:07:01.537 [2024-07-15 13:52:39.606318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.537 [2024-07-15 13:52:39.606388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.537 [2024-07-15 13:52:39.606477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.537 [2024-07-15 13:52:39.606502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.537 [2024-07-15 13:52:39.606587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.537 [2024-07-15 13:52:39.606612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.537 [2024-07-15 13:52:39.606696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.537 [2024-07-15 13:52:39.606721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.796 #4 NEW cov: 12007 ft: 12522 corp: 3/66b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertByte- 00:07:01.796 [2024-07-15 13:52:39.666059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.666090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.666166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.666180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.666238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.666252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.666308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.666321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.796 #5 NEW cov: 12013 ft: 12766 corp: 4/100b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertByte- 00:07:01.796 [2024-07-15 13:52:39.716400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.716429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.716504] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.716519] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.716579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.716592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.716650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.716664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.716721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.716735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.796 #6 NEW cov: 12098 ft: 13119 corp: 5/135b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:07:01.796 [2024-07-15 13:52:39.766437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.766464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.766540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.766554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.766611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.796 [2024-07-15 13:52:39.766624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.796 [2024-07-15 13:52:39.766681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.766694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.766751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.766764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.797 #7 NEW cov: 12098 ft: 13234 corp: 6/170b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:07:01.797 [2024-07-15 13:52:39.816450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.816477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.816554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.816568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.816627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.816641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.816700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:7 cdw10:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.816713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.797 #8 NEW cov: 12098 ft: 13387 corp: 7/200b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 EraseBytes- 00:07:01.797 [2024-07-15 13:52:39.856757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.856783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.856846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.856859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.856932] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.856945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.857008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.857022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.797 [2024-07-15 13:52:39.857079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.797 [2024-07-15 13:52:39.857093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.056 #9 NEW cov: 12098 ft: 13425 corp: 8/235b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:02.056 [2024-07-15 13:52:39.906751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.906778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.906836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.906849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.906906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.906920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.906977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.906990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.056 #10 NEW cov: 12098 ft: 13444 corp: 9/267b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 ChangeByte- 00:07:02.056 [2024-07-15 13:52:39.946825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.946851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.946908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.946922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.946978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.946991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.947047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.947060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.056 #11 NEW cov: 12098 ft: 13499 corp: 10/299b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:07:02.056 [2024-07-15 13:52:39.996958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.996984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.997060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.997078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.997135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:39.997149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:39.997206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 
13:52:39.997225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.056 #12 NEW cov: 12098 ft: 13538 corp: 11/330b lim: 35 exec/s: 0 rss: 73Mb L: 31/35 MS: 1 InsertByte- 00:07:02.056 [2024-07-15 13:52:40.066971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:40.067006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:40.067068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:40.067083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.056 #13 NEW cov: 12098 ft: 13865 corp: 12/349b lim: 35 exec/s: 0 rss: 73Mb L: 19/35 MS: 1 EraseBytes- 00:07:02.056 [2024-07-15 13:52:40.107285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:40.107320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:40.107380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:40.107394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:40.107450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:40.107464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.056 [2024-07-15 13:52:40.107521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.056 [2024-07-15 13:52:40.107535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.315 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:02.315 #14 NEW cov: 12121 ft: 13881 corp: 13/383b lim: 35 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 CopyPart- 00:07:02.315 [2024-07-15 13:52:40.147435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.147462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.147518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.147533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.147591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 
[2024-07-15 13:52:40.147605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.147666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.147683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.315 #15 NEW cov: 12121 ft: 13901 corp: 14/416b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:02.315 [2024-07-15 13:52:40.187677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.187702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.187776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.187790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.187847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.187861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.187920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000034 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.187934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.187994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.188008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.315 #16 NEW cov: 12121 ft: 14003 corp: 15/451b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeASCIIInt- 00:07:02.315 [2024-07-15 13:52:40.227629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.227655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.315 [2024-07-15 13:52:40.227712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.315 [2024-07-15 13:52:40.227726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.227784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.227798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.227857] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.227870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.316 #17 NEW cov: 12121 ft: 14016 corp: 16/484b lim: 35 exec/s: 17 rss: 73Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:02.316 [2024-07-15 13:52:40.267744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.267769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.267842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.267857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.267918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.267932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.267987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.268001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.316 #18 NEW cov: 12121 ft: 14018 corp: 17/516b lim: 35 exec/s: 18 rss: 73Mb L: 32/35 MS: 1 ShuffleBytes- 00:07:02.316 [2024-07-15 13:52:40.317928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.317953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.318012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.318026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.318082] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.318096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.318154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.318167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.316 #19 NEW cov: 12121 ft: 14049 corp: 18/548b lim: 35 exec/s: 19 rss: 73Mb L: 32/35 MS: 1 ChangeBinInt- 00:07:02.316 [2024-07-15 13:52:40.358188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.358214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.358276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.358290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.358349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.358363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.358422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000034 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.358435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.316 [2024-07-15 13:52:40.358493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.316 [2024-07-15 13:52:40.358507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.575 #20 NEW cov: 12121 ft: 14082 corp: 19/583b lim: 35 exec/s: 20 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:02.575 [2024-07-15 13:52:40.408153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.408179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.575 [2024-07-15 13:52:40.408249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.408264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.575 [2024-07-15 13:52:40.408322] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.408336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.575 [2024-07-15 13:52:40.408394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.408408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.575 #21 NEW cov: 12121 ft: 14110 corp: 20/615b lim: 35 exec/s: 21 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:07:02.575 [2024-07-15 13:52:40.448324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.448351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.575 [2024-07-15 
13:52:40.448410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.448426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.575 [2024-07-15 13:52:40.448484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.575 [2024-07-15 13:52:40.448498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.448554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.448569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.576 #22 NEW cov: 12121 ft: 14127 corp: 21/647b lim: 35 exec/s: 22 rss: 73Mb L: 32/35 MS: 1 CopyPart- 00:07:02.576 [2024-07-15 13:52:40.498450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.498476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.498532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.498546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.498601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.498614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.498667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:7 cdw10:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.498681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.576 #23 NEW cov: 12121 ft: 14136 corp: 22/677b lim: 35 exec/s: 23 rss: 73Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:02.576 [2024-07-15 13:52:40.538514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.538541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.538604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.538617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.538676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.538689] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.538746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.538760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.576 #24 NEW cov: 12121 ft: 14177 corp: 23/709b lim: 35 exec/s: 24 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:07:02.576 [2024-07-15 13:52:40.578654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.578680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.578739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.578753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.578809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.578823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.578880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.578893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.576 #25 NEW cov: 12121 ft: 14199 corp: 24/740b lim: 35 exec/s: 25 rss: 73Mb L: 31/35 MS: 1 ChangeBinInt- 00:07:02.576 [2024-07-15 13:52:40.628446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.628472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.576 [2024-07-15 13:52:40.628533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.576 [2024-07-15 13:52:40.628547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.835 #26 NEW cov: 12121 ft: 14261 corp: 25/760b lim: 35 exec/s: 26 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:07:02.835 [2024-07-15 13:52:40.678906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.678931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.679007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.679022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:02.835 [2024-07-15 13:52:40.679080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.679094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.679154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.679168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.835 #27 NEW cov: 12121 ft: 14302 corp: 26/792b lim: 35 exec/s: 27 rss: 74Mb L: 32/35 MS: 1 ChangeBit- 00:07:02.835 [2024-07-15 13:52:40.729166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.729193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.729252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.729267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.729324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.729337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.729395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.729408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.729466] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.729479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.835 #28 NEW cov: 12121 ft: 14311 corp: 27/827b lim: 35 exec/s: 28 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:02.835 [2024-07-15 13:52:40.769335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.769361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.769423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.769436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.769494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.769508] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.769567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.769580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.769636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.769649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.835 #29 NEW cov: 12121 ft: 14339 corp: 28/862b lim: 35 exec/s: 29 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:02.835 [2024-07-15 13:52:40.809150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.809178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.809240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.809255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.809315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.809329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.835 #30 NEW cov: 12121 ft: 14521 corp: 29/889b lim: 35 exec/s: 30 rss: 74Mb L: 27/35 MS: 1 InsertRepeatedBytes- 00:07:02.835 [2024-07-15 13:52:40.859501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.859528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.859589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.859605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.859665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.859682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.835 [2024-07-15 13:52:40.859739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:7 cdw10:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.835 [2024-07-15 13:52:40.859754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.835 #31 NEW cov: 12128 ft: 14532 corp: 30/919b lim: 35 exec/s: 31 rss: 74Mb 
L: 30/35 MS: 1 ChangeBinInt- 00:07:03.094 [2024-07-15 13:52:40.909640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.909667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:40.909746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.909761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:40.909820] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.909835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:40.909892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.909907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.094 #32 NEW cov: 12128 ft: 14545 corp: 31/950b lim: 35 exec/s: 32 rss: 74Mb L: 31/35 MS: 1 ShuffleBytes- 00:07:03.094 [2024-07-15 13:52:40.959681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.959708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:40.959783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000041 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.959801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:40.959860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.959874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:40.959929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:40.959943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.094 #33 NEW cov: 12128 ft: 14551 corp: 32/983b lim: 35 exec/s: 33 rss: 74Mb L: 33/35 MS: 1 ChangeBit- 00:07:03.094 [2024-07-15 13:52:41.009997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:41.010024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:41.010101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 
[2024-07-15 13:52:41.010116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:41.010175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:41.010188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:41.010257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:41.010272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:41.010327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:41.010341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.094 #34 NEW cov: 12128 ft: 14564 corp: 33/1018b lim: 35 exec/s: 34 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:03.094 [2024-07-15 13:52:41.059600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:41.059626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.094 [2024-07-15 13:52:41.059703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.094 [2024-07-15 13:52:41.059717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.094 #35 NEW cov: 12128 ft: 14615 corp: 34/1035b lim: 35 exec/s: 35 rss: 75Mb L: 17/35 MS: 1 EraseBytes- 00:07:03.094 [2024-07-15 13:52:41.100233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.100260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.100339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.100354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.100412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.100429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.100486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.100500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.100557] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.100572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.095 #36 NEW cov: 12128 ft: 14623 corp: 35/1070b lim: 35 exec/s: 36 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:03.095 [2024-07-15 13:52:41.150193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.150225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.150303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.150317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.150375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.150389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.095 [2024-07-15 13:52:41.150446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.095 [2024-07-15 13:52:41.150460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.353 #37 NEW cov: 12128 ft: 14638 corp: 36/1104b lim: 35 exec/s: 37 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:07:03.353 [2024-07-15 13:52:41.190427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.353 [2024-07-15 13:52:41.190455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.353 [2024-07-15 13:52:41.190531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.353 [2024-07-15 13:52:41.190546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.353 [2024-07-15 13:52:41.190605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.353 [2024-07-15 13:52:41.190620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.353 [2024-07-15 13:52:41.190677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.353 [2024-07-15 13:52:41.190690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.353 [2024-07-15 13:52:41.190747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.353 [2024-07-15 13:52:41.190761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:03.353 #38 NEW cov: 12128 ft: 14648 corp: 37/1139b lim: 35 exec/s: 38 rss: 75Mb L: 35/35 MS: 1 ChangeBit-
00:07:03.353 [2024-07-15 13:52:41.230541] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:03.353 [2024-07-15 13:52:41.230569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:03.353 [2024-07-15 13:52:41.230630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:03.353 [2024-07-15 13:52:41.230660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:03.353 [2024-07-15 13:52:41.230720] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:03.353 [2024-07-15 13:52:41.230734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:03.353 [2024-07-15 13:52:41.230792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:03.353 [2024-07-15 13:52:41.230806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:03.353 [2024-07-15 13:52:41.230863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000051 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:03.353 [2024-07-15 13:52:41.230876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:03.353 #39 NEW cov: 12128 ft: 14654 corp: 38/1174b lim: 35 exec/s: 19 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes-
00:07:03.353 #39 DONE cov: 12128 ft: 14654 corp: 38/1174b lim: 35 exec/s: 19 rss: 75Mb
00:07:03.353 Done 39 runs in 2 second(s)
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415
00:07:03.353 13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415'
13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
13:52:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15
00:07:03.611 [2024-07-15 13:52:41.433787] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:03.611 [2024-07-15 13:52:41.433858] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847125 ]
00:07:03.611 EAL: No free 2048 kB hugepages reported on node 1
00:07:03.611 [2024-07-15 13:52:41.658904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.870 [2024-07-15 13:52:41.729579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.870 [2024-07-15 13:52:41.789296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:03.870 [2024-07-15 13:52:41.805594] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 ***
00:07:03.870 INFO: Running with entropic power schedule (0xFF, 100).
00:07:03.870 INFO: Seed: 2233139160
00:07:03.870 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c),
00:07:03.870 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560),
00:07:03.870 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:07:03.870 INFO: A corpus is not provided, starting from an empty corpus
00:07:03.870 #2 INITED exec/s: 0 rss: 65Mb
00:07:03.870 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
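The start_llvm_fuzz steps traced just above show the per-fuzzer setup: a private corpus directory, the shared JSON target config rewritten to a per-fuzzer NVMe/TCP port, and a LeakSanitizer suppression file, after which llvm_nvme_fuzz is launched against the freshly started listener. A condensed sketch of those steps follows; the output redirections, the "44%02d" port rule, and the LSAN_OPTIONS wiring are inferred, since the set -x trace shows only the bare commands:

    #!/usr/bin/env bash
    # Sketch of nvmf/run.sh start_llvm_fuzz as traced above (run.sh@23-@45).
    # Redirections into $nvmf_cfg/$suppress_file are assumptions.
    fuzzer_type=15; timen=1; core=0x1
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    corpus_dir=$spdk/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    port="44$(printf %02d "$fuzzer_type")"    # printf %02d 15 -> port=4415
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Point this fuzzer's NVMe/TCP target at its own port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # Long-lived allocations that LeakSanitizer should ignore
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
        "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$spdk/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type"

The INFO banner above (seed, 357840 inline 8-bit counters, 0 files in the corpus directory) is libFuzzer initializing against that listener and starting from an empty corpus, which is why it immediately warns about finding no interesting inputs.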
00:07:03.870 This may also happen if the target rejected all inputs we tried so far 00:07:03.870 [2024-07-15 13:52:41.864602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.870 [2024-07-15 13:52:41.864631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.128 NEW_FUNC[1/695]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:04.128 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.128 #13 NEW cov: 11863 ft: 11864 corp: 2/10b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:04.387 [2024-07-15 13:52:42.205801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.205860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.205948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.205974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.387 #14 NEW cov: 11995 ft: 12772 corp: 3/28b lim: 35 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 CrossOver- 00:07:04.387 [2024-07-15 13:52:42.265916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.265945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.266022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.266036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.266090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.266106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.266161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.266177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.387 #15 NEW cov: 12001 ft: 13638 corp: 4/58b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:04.387 [2024-07-15 13:52:42.306034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.306061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:04.387 [2024-07-15 13:52:42.306121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.306136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.306192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.306205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.306267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.306281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.387 #16 NEW cov: 12086 ft: 13929 corp: 5/88b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBit- 00:07:04.387 [2024-07-15 13:52:42.356046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.356073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.356132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.356147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.356204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.356223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.387 #17 NEW cov: 12086 ft: 14150 corp: 6/115b lim: 35 exec/s: 0 rss: 72Mb L: 27/30 MS: 1 CrossOver- 00:07:04.387 [2024-07-15 13:52:42.406329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.406355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.406429] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.406443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.406500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.406514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.406570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000077d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 
13:52:42.406583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.387 #18 NEW cov: 12086 ft: 14266 corp: 7/146b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 InsertByte- 00:07:04.387 [2024-07-15 13:52:42.456212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.456244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.387 [2024-07-15 13:52:42.456304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.387 [2024-07-15 13:52:42.456318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.647 #19 NEW cov: 12086 ft: 14408 corp: 8/160b lim: 35 exec/s: 0 rss: 72Mb L: 14/31 MS: 1 EraseBytes- 00:07:04.647 [2024-07-15 13:52:42.496312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.496338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.496396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.496410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.647 #20 NEW cov: 12086 ft: 14450 corp: 9/174b lim: 35 exec/s: 0 rss: 72Mb L: 14/31 MS: 1 CrossOver- 00:07:04.647 [2024-07-15 13:52:42.546530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.546554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.546614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.546643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.546703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.546716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.647 #21 NEW cov: 12086 ft: 14460 corp: 10/198b lim: 35 exec/s: 0 rss: 72Mb L: 24/31 MS: 1 EraseBytes- 00:07:04.647 [2024-07-15 13:52:42.596827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.596853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.596927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 
[2024-07-15 13:52:42.596942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.596997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.597011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.597067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.597081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.647 #22 NEW cov: 12086 ft: 14509 corp: 11/229b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 CMP- DE: "\010\000\000\000"- 00:07:04.647 [2024-07-15 13:52:42.636662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.636689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.636767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.636782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.647 #23 NEW cov: 12086 ft: 14564 corp: 12/247b lim: 35 exec/s: 0 rss: 72Mb L: 18/31 MS: 1 ChangeByte- 00:07:04.647 [2024-07-15 13:52:42.677030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.677054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.677109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000001e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.677122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.677181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.677194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.677271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.677284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.647 #24 NEW cov: 12086 ft: 14598 corp: 13/277b lim: 35 exec/s: 0 rss: 72Mb L: 30/31 MS: 1 ChangeBinInt- 00:07:04.647 [2024-07-15 13:52:42.716928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.716953] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.647 [2024-07-15 13:52:42.717012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.647 [2024-07-15 13:52:42.717026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.906 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:04.906 #25 NEW cov: 12109 ft: 14660 corp: 14/295b lim: 35 exec/s: 0 rss: 73Mb L: 18/31 MS: 1 ChangeBit- 00:07:04.906 [2024-07-15 13:52:42.766971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.766995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.906 #26 NEW cov: 12109 ft: 14681 corp: 15/306b lim: 35 exec/s: 0 rss: 73Mb L: 11/31 MS: 1 EraseBytes- 00:07:04.906 [2024-07-15 13:52:42.807220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.807262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.807336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.807351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.807407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.807421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.906 #27 NEW cov: 12109 ft: 14712 corp: 16/333b lim: 35 exec/s: 0 rss: 73Mb L: 27/31 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:04.906 [2024-07-15 13:52:42.847391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.847415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.847475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.847489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.847547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.847560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.906 #28 NEW cov: 12109 ft: 14718 corp: 17/360b lim: 35 exec/s: 28 rss: 73Mb L: 27/31 MS: 1 CrossOver- 00:07:04.906 [2024-07-15 13:52:42.897403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.897428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.897500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.897514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.906 #29 NEW cov: 12109 ft: 14756 corp: 18/374b lim: 35 exec/s: 29 rss: 73Mb L: 14/31 MS: 1 ChangeBinInt- 00:07:04.906 [2024-07-15 13:52:42.937773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.937798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.937873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.937887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.906 [2024-07-15 13:52:42.938003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.906 [2024-07-15 13:52:42.938017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.906 NEW_FUNC[1/3]: 0x4b4bb0 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:07:05.165 NEW_FUNC[2/3]: 0x11dea50 in nvmf_ctrlr_get_features_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1686 00:07:05.165 #30 NEW cov: 12164 ft: 14837 corp: 19/405b lim: 35 exec/s: 30 rss: 73Mb L: 31/31 MS: 1 ChangeBit- 00:07:05.165 [2024-07-15 13:52:42.997810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:42.997835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:42.997910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:42.997924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:42.997985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:42.997998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.165 #31 NEW cov: 12164 ft: 14856 corp: 20/427b lim: 35 exec/s: 31 rss: 73Mb L: 22/31 MS: 1 CrossOver- 00:07:05.165 [2024-07-15 13:52:43.047795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 
13:52:43.047819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:43.047896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.047910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.165 #32 NEW cov: 12164 ft: 14875 corp: 21/446b lim: 35 exec/s: 32 rss: 73Mb L: 19/31 MS: 1 CopyPart- 00:07:05.165 [2024-07-15 13:52:43.087929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.087954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:43.088031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.088045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.165 #33 NEW cov: 12164 ft: 14898 corp: 22/465b lim: 35 exec/s: 33 rss: 73Mb L: 19/31 MS: 1 InsertByte- 00:07:05.165 [2024-07-15 13:52:43.128282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.128306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:43.128382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.128396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:43.128457] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.128470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:43.128530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.128543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.165 #34 NEW cov: 12164 ft: 14909 corp: 23/495b lim: 35 exec/s: 34 rss: 73Mb L: 30/31 MS: 1 CopyPart- 00:07:05.165 [2024-07-15 13:52:43.168040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.168064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.165 #35 NEW cov: 12164 ft: 14913 corp: 24/508b lim: 35 exec/s: 35 rss: 73Mb L: 13/31 MS: 1 CMP- DE: "\377\001"- 00:07:05.165 [2024-07-15 13:52:43.208240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
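Each "#N NEW cov: ..." line in this output is libFuzzer's standard status format: "cov" is the number of covered code points, "ft" the number of features (finer-grained coverage signals), "corp: 24/508b" the corpus size in inputs and total bytes, "lim" the current input-length cap, "L: a/b" the new input's length versus the largest allowed, and "MS: k Mut1-Mut2-" the mutation sequence that produced it (CrossOver, EraseBytes, CopyPart, ChangeBit, and so on; "DE:" names a dictionary entry used by CMP or PersAutoDict). A quick way to tally which mutations are paying off in a saved copy of this console output (sketch; build.log is a hypothetical file name):

    # Count the mutations behind the "#N NEW" coverage events in a saved log.
    grep -oE 'MS: [0-9]+ [A-Za-z-]+' build.log \
      | awk '{print $3}' | tr '-' '\n' | sed '/^$/d' \
      | sort | uniq -c | sort -rn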
00:07:05.165 [2024-07-15 13:52:43.208265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.165 [2024-07-15 13:52:43.208325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.165 [2024-07-15 13:52:43.208342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.424 #36 NEW cov: 12164 ft: 14931 corp: 25/527b lim: 35 exec/s: 36 rss: 73Mb L: 19/31 MS: 1 ShuffleBytes- 00:07:05.424 [2024-07-15 13:52:43.258516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.258540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.258614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.258628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.258686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.258700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.424 #37 NEW cov: 12164 ft: 14945 corp: 26/551b lim: 35 exec/s: 37 rss: 73Mb L: 24/31 MS: 1 InsertRepeatedBytes- 00:07:05.424 [2024-07-15 13:52:43.298618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.298642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.298719] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.298734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.298792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.298805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.424 #38 NEW cov: 12164 ft: 14980 corp: 27/578b lim: 35 exec/s: 38 rss: 73Mb L: 27/31 MS: 1 ChangeBit- 00:07:05.424 [2024-07-15 13:52:43.338853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.338877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.338952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.338966] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.339027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.339041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.424 [2024-07-15 13:52:43.339100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.339113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.424 #39 NEW cov: 12164 ft: 14995 corp: 28/608b lim: 35 exec/s: 39 rss: 73Mb L: 30/31 MS: 1 ChangeBinInt- 00:07:05.424 [2024-07-15 13:52:43.378596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.424 [2024-07-15 13:52:43.378637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.425 #40 NEW cov: 12164 ft: 15017 corp: 29/620b lim: 35 exec/s: 40 rss: 73Mb L: 12/31 MS: 1 InsertByte- 00:07:05.425 [2024-07-15 13:52:43.418827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.425 [2024-07-15 13:52:43.418852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.425 [2024-07-15 13:52:43.418911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.425 [2024-07-15 13:52:43.418925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.425 #41 NEW cov: 12164 ft: 15043 corp: 30/639b lim: 35 exec/s: 41 rss: 73Mb L: 19/31 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:05.425 [2024-07-15 13:52:43.458979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.425 [2024-07-15 13:52:43.459003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.425 [2024-07-15 13:52:43.459061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.425 [2024-07-15 13:52:43.459075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.425 #42 NEW cov: 12164 ft: 15047 corp: 31/653b lim: 35 exec/s: 42 rss: 73Mb L: 14/31 MS: 1 ChangeBit- 00:07:05.683 [2024-07-15 13:52:43.509360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.683 [2024-07-15 13:52:43.509385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.683 [2024-07-15 13:52:43.509457] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:05.683 [2024-07-15 13:52:43.509470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.683 [2024-07-15 13:52:43.509528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.683 [2024-07-15 13:52:43.509542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.683 [2024-07-15 13:52:43.509601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.509614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.684 #43 NEW cov: 12164 ft: 15060 corp: 32/683b lim: 35 exec/s: 43 rss: 74Mb L: 30/31 MS: 1 InsertRepeatedBytes- 00:07:05.684 [2024-07-15 13:52:43.559488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.559512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.559583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.559597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.559656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.559669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.559728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.559742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.684 #44 NEW cov: 12164 ft: 15071 corp: 33/713b lim: 35 exec/s: 44 rss: 74Mb L: 30/31 MS: 1 CrossOver- 00:07:05.684 [2024-07-15 13:52:43.599356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.599380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.599437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.599450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.684 #45 NEW cov: 12164 ft: 15074 corp: 34/731b lim: 35 exec/s: 45 rss: 74Mb L: 18/31 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:05.684 [2024-07-15 13:52:43.639583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.639617] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.639674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.639688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.639741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.639755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.684 #46 NEW cov: 12164 ft: 15084 corp: 35/755b lim: 35 exec/s: 46 rss: 74Mb L: 24/31 MS: 1 ChangeBinInt- 00:07:05.684 [2024-07-15 13:52:43.689499] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.689525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.684 #47 NEW cov: 12164 ft: 15091 corp: 36/768b lim: 35 exec/s: 47 rss: 74Mb L: 13/31 MS: 1 EraseBytes- 00:07:05.684 [2024-07-15 13:52:43.739842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.739868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.739926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.739940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.684 [2024-07-15 13:52:43.739997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.684 [2024-07-15 13:52:43.740010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.944 #48 NEW cov: 12164 ft: 15099 corp: 37/795b lim: 35 exec/s: 48 rss: 74Mb L: 27/31 MS: 1 ChangeByte- 00:07:05.944 [2024-07-15 13:52:43.779832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.944 [2024-07-15 13:52:43.779857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.944 [2024-07-15 13:52:43.779931] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.944 [2024-07-15 13:52:43.779948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.944 #49 NEW cov: 12164 ft: 15117 corp: 38/815b lim: 35 exec/s: 49 rss: 74Mb L: 20/31 MS: 1 InsertByte- 00:07:05.944 [2024-07-15 13:52:43.830052] ctrlr.c:1834:nvmf_ctrlr_get_features_reservation_notification_mask: *ERROR*: get Features - Invalid Namespace ID 00:07:05.944 [2024-07-15 13:52:43.830294] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:05.944 [2024-07-15 13:52:43.830319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:05.944 [2024-07-15 13:52:43.830379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000001e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:05.944 [2024-07-15 13:52:43.830393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:05.944 [2024-07-15 13:52:43.830448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:05.944 [2024-07-15 13:52:43.830461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:05.944 [2024-07-15 13:52:43.830517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE MASK cid:7 cdw10:00000482 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:05.944 [2024-07-15 13:52:43.830531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:05.944 NEW_FUNC[1/1]: 0x11e1c40 in nvmf_ctrlr_get_features_reservation_notification_mask /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1818
00:07:05.944 #50 NEW cov: 12187 ft: 15148 corp: 39/845b lim: 35 exec/s: 25 rss: 74Mb L: 30/31 MS: 1 ChangeBinInt-
00:07:05.944 #50 DONE cov: 12187 ft: 15148 corp: 39/845b lim: 35 exec/s: 25 rss: 74Mb
00:07:05.944 ###### Recommended dictionary. ######
00:07:05.944 "\000\000\000\000\000\000\000\000" # Uses: 1
00:07:05.944 "\010\000\000\000" # Uses: 2
00:07:05.944 "\377\001" # Uses: 0
00:07:05.944 ###### End of recommended dictionary. ######
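The recommended-dictionary block above is libFuzzer feeding back the byte strings (the CMP- and PersAutoDict-derived values, such as the all-zero eight-byte SET FEATURES payload) that repeatedly led to new coverage during this run. They can be saved and passed back in on a later run with libFuzzer's standard -dict= flag; note that the -dict= parser takes \xNN hex escapes, so the octal escapes printed above need converting, and whether this harness forwards extra flags to libFuzzer is an assumption. A sketch, with a file name of our choosing:

    # Persist the recommended entries in -dict= syntax (\xNN instead of octal).
    cat > /tmp/llvm_nvmf_15.dict <<'EOF'
    "\x00\x00\x00\x00\x00\x00\x00\x00"
    "\x08\x00\x00\x00"
    "\xff\x01"
    EOF
    # Illustrative only -- the traced run.sh does not itself do this:
    #   llvm_nvme_fuzz ... -dict=/tmp/llvm_nvmf_15.dict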
00:07:05.944 Done 50 runs in 2 second(s)
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416
00:07:05.944 13:52:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
13:52:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416'
13:52:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
13:52:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
13:52:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
13:52:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16
[2024-07-15 13:52:44.038714] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
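The (( i++ )) / (( i < fuzz_num )) pair traced from ../common.sh@72 above is the outer driver: it walks through the fuzzer types one after another, calling start_llvm_fuzz for each and letting run.sh@54 remove the per-run config and suppression files on the way out. A minimal reconstruction of that loop (the starting index and fuzz_num are assumptions; only the increment, the bound check, and the call are visible in the trace):

    # Driver loop per the ../common.sh@72-@73 trace; bounds are illustrative.
    fuzz_num=25   # hypothetical count of fuzzer types
    for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz "$i" 1 0x1    # fuzzer type, run time in minutes, core mask
    done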
00:07:06.203 [2024-07-15 13:52:44.038783] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847491 ]
00:07:06.203 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.203 [2024-07-15 13:52:44.251915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.462 [2024-07-15 13:52:44.323045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.462 [2024-07-15 13:52:44.382430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:06.462 [2024-07-15 13:52:44.398719] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 ***
00:07:06.462 INFO: Running with entropic power schedule (0xFF, 100).
00:07:06.462 INFO: Seed: 532188132
00:07:06.462 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c),
00:07:06.462 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560),
00:07:06.462 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:07:06.462 INFO: A corpus is not provided, starting from an empty corpus
00:07:06.462 #2 INITED exec/s: 0 rss: 65Mb
00:07:06.462 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:06.462 This may also happen if the target rejected all inputs we tried so far
00:07:06.462 [2024-07-15 13:52:44.453843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:06.462 [2024-07-15 13:52:44.453875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:06.721 NEW_FUNC[1/696]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519
00:07:06.721 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:06.721 #26 NEW cov: 11969 ft: 11970 corp: 2/27b lim: 105 exec/s: 0 rss: 72Mb L: 26/26 MS: 4 ChangeBit-ChangeByte-CrossOver-InsertRepeatedBytes-
00:07:06.980 [2024-07-15 13:52:44.794795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5497558138880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:06.980 [2024-07-15 13:52:44.794860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:06.980 #27 NEW cov: 12099 ft: 12629 corp: 3/54b lim: 105 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 InsertByte-
00:07:06.980 [2024-07-15 13:52:44.854730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:06.980 [2024-07-15 13:52:44.854762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:06.980 #28 NEW cov: 12105 ft: 12983 corp: 4/80b lim: 105 exec/s: 0 rss: 73Mb L: 26/27 MS: 1 ChangeByte-
00:07:06.980 [2024-07-15 13:52:44.895075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:06.980 [2024-07-15 13:52:44.895103] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.980 [2024-07-15 13:52:44.895145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.980 [2024-07-15 13:52:44.895167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.980 [2024-07-15 13:52:44.895227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.980 [2024-07-15 13:52:44.895243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.980 #29 NEW cov: 12190 ft: 13767 corp: 5/163b lim: 105 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:07:06.980 [2024-07-15 13:52:44.934959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.980 [2024-07-15 13:52:44.934986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.980 #30 NEW cov: 12190 ft: 13822 corp: 6/190b lim: 105 exec/s: 0 rss: 73Mb L: 27/83 MS: 1 InsertByte- 00:07:06.980 [2024-07-15 13:52:44.985094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5497558138880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.980 [2024-07-15 13:52:44.985122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.980 #31 NEW cov: 12190 ft: 13875 corp: 7/217b lim: 105 exec/s: 0 rss: 73Mb L: 27/83 MS: 1 ChangeBinInt- 00:07:06.980 [2024-07-15 13:52:45.035222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.980 [2024-07-15 13:52:45.035250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 #32 NEW cov: 12190 ft: 13926 corp: 8/244b lim: 105 exec/s: 0 rss: 73Mb L: 27/83 MS: 1 InsertByte- 00:07:07.238 [2024-07-15 13:52:45.075588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.075618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 [2024-07-15 13:52:45.075654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.075670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.238 [2024-07-15 13:52:45.075724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.075740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.238 #33 NEW cov: 12190 ft: 13958 corp: 9/327b lim: 105 exec/s: 0 rss: 73Mb L: 
83/83 MS: 1 CrossOver- 00:07:07.238 [2024-07-15 13:52:45.125453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.125481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 #34 NEW cov: 12190 ft: 13994 corp: 10/353b lim: 105 exec/s: 0 rss: 73Mb L: 26/83 MS: 1 ChangeBinInt- 00:07:07.238 [2024-07-15 13:52:45.165578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.165606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 #35 NEW cov: 12190 ft: 14066 corp: 11/381b lim: 105 exec/s: 0 rss: 73Mb L: 28/83 MS: 1 InsertByte- 00:07:07.238 [2024-07-15 13:52:45.215798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.215832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 [2024-07-15 13:52:45.215904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.215919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.238 #36 NEW cov: 12190 ft: 14348 corp: 12/427b lim: 105 exec/s: 0 rss: 73Mb L: 46/83 MS: 1 CopyPart- 00:07:07.238 [2024-07-15 13:52:45.255800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.255828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 #37 NEW cov: 12190 ft: 14371 corp: 13/456b lim: 105 exec/s: 0 rss: 73Mb L: 29/83 MS: 1 InsertByte- 00:07:07.238 [2024-07-15 13:52:45.306080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.306109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.238 [2024-07-15 13:52:45.306158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.238 [2024-07-15 13:52:45.306174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.496 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:07.496 #38 NEW cov: 12213 ft: 14448 corp: 14/502b lim: 105 exec/s: 0 rss: 73Mb L: 46/83 MS: 1 ChangeBit- 00:07:07.496 [2024-07-15 13:52:45.366088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5497558138880 len:49 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.496 [2024-07-15 13:52:45.366115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.496 
#39 NEW cov: 12213 ft: 14519 corp: 15/529b lim: 105 exec/s: 0 rss: 73Mb L: 27/83 MS: 1 ChangeByte- 00:07:07.496 [2024-07-15 13:52:45.416251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4294967296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.496 [2024-07-15 13:52:45.416278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.496 #40 NEW cov: 12213 ft: 14571 corp: 16/558b lim: 105 exec/s: 40 rss: 74Mb L: 29/83 MS: 1 ChangeBit- 00:07:07.497 [2024-07-15 13:52:45.466369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12666373951979520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.497 [2024-07-15 13:52:45.466403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.497 #41 NEW cov: 12213 ft: 14578 corp: 17/584b lim: 105 exec/s: 41 rss: 74Mb L: 26/83 MS: 1 ChangeByte- 00:07:07.497 [2024-07-15 13:52:45.506455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.497 [2024-07-15 13:52:45.506482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.497 #42 NEW cov: 12213 ft: 14599 corp: 18/622b lim: 105 exec/s: 42 rss: 74Mb L: 38/83 MS: 1 CopyPart- 00:07:07.497 [2024-07-15 13:52:45.546582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1026497201408 len:61424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.497 [2024-07-15 13:52:45.546610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.755 #43 NEW cov: 12213 ft: 14647 corp: 19/649b lim: 105 exec/s: 43 rss: 74Mb L: 27/83 MS: 1 CrossOver- 00:07:07.755 [2024-07-15 13:52:45.596697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.755 [2024-07-15 13:52:45.596725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.755 #44 NEW cov: 12213 ft: 14649 corp: 20/675b lim: 105 exec/s: 44 rss: 74Mb L: 26/83 MS: 1 ChangeBinInt- 00:07:07.755 [2024-07-15 13:52:45.636826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.755 [2024-07-15 13:52:45.636853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.755 #45 NEW cov: 12213 ft: 14669 corp: 21/713b lim: 105 exec/s: 45 rss: 74Mb L: 38/83 MS: 1 CrossOver- 00:07:07.755 [2024-07-15 13:52:45.687087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72057594037927936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.755 [2024-07-15 13:52:45.687115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.755 [2024-07-15 13:52:45.687169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4251398048237750948 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.755 [2024-07-15 13:52:45.687184] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.755 #46 NEW cov: 12213 ft: 14679 corp: 22/767b lim: 105 exec/s: 46 rss: 74Mb L: 54/83 MS: 1 CopyPart- 00:07:07.756 [2024-07-15 13:52:45.737205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.756 [2024-07-15 13:52:45.737236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.756 [2024-07-15 13:52:45.737274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.756 [2024-07-15 13:52:45.737289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.756 #47 NEW cov: 12213 ft: 14768 corp: 23/813b lim: 105 exec/s: 47 rss: 74Mb L: 46/83 MS: 1 ChangeByte- 00:07:07.756 [2024-07-15 13:52:45.777203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.756 [2024-07-15 13:52:45.777236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.756 #48 NEW cov: 12213 ft: 14782 corp: 24/840b lim: 105 exec/s: 48 rss: 74Mb L: 27/83 MS: 1 ChangeBinInt- 00:07:07.756 [2024-07-15 13:52:45.817330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.756 [2024-07-15 13:52:45.817357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.014 #49 NEW cov: 12213 ft: 14839 corp: 25/867b lim: 105 exec/s: 49 rss: 74Mb L: 27/83 MS: 1 CopyPart- 00:07:08.014 [2024-07-15 13:52:45.857594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.014 [2024-07-15 13:52:45.857625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.014 [2024-07-15 13:52:45.857686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446736381423124479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.014 [2024-07-15 13:52:45.857704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.014 #50 NEW cov: 12213 ft: 14872 corp: 26/913b lim: 105 exec/s: 50 rss: 74Mb L: 46/83 MS: 1 ChangeBinInt- 00:07:08.014 [2024-07-15 13:52:45.897597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5497558138880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.014 [2024-07-15 13:52:45.897624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.014 #51 NEW cov: 12213 ft: 14897 corp: 27/941b lim: 105 exec/s: 51 rss: 74Mb L: 28/83 MS: 1 InsertByte- 00:07:08.014 [2024-07-15 13:52:45.938054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.014 [2024-07-15 13:52:45.938081] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.014 [2024-07-15 13:52:45.938152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.014 [2024-07-15 13:52:45.938169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.015 [2024-07-15 13:52:45.938230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:45.938245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.015 [2024-07-15 13:52:45.938301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:45.938315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.015 #52 NEW cov: 12213 ft: 15398 corp: 28/1030b lim: 105 exec/s: 52 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:07:08.015 [2024-07-15 13:52:45.977794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12666373951979520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:45.977821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.015 #53 NEW cov: 12213 ft: 15405 corp: 29/1056b lim: 105 exec/s: 53 rss: 74Mb L: 26/89 MS: 1 ShuffleBytes- 00:07:08.015 [2024-07-15 13:52:46.027961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12666373951979520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:46.027988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.015 #54 NEW cov: 12213 ft: 15418 corp: 30/1082b lim: 105 exec/s: 54 rss: 74Mb L: 26/89 MS: 1 ChangeBinInt- 00:07:08.015 [2024-07-15 13:52:46.068363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:46.068389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.015 [2024-07-15 13:52:46.068454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:46.068471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.015 [2024-07-15 13:52:46.068529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.015 [2024-07-15 13:52:46.068545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.273 #55 NEW cov: 12213 ft: 15465 corp: 31/1147b lim: 105 exec/s: 55 rss: 74Mb L: 65/89 MS: 1 CrossOver- 00:07:08.273 [2024-07-15 13:52:46.108202] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.108236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.273 #56 NEW cov: 12213 ft: 15475 corp: 32/1173b lim: 105 exec/s: 56 rss: 74Mb L: 26/89 MS: 1 ChangeBit- 00:07:08.273 [2024-07-15 13:52:46.158336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9895604667649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.158364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.273 #57 NEW cov: 12213 ft: 15491 corp: 33/1200b lim: 105 exec/s: 57 rss: 74Mb L: 27/89 MS: 1 ChangeBinInt- 00:07:08.273 [2024-07-15 13:52:46.208710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72057594037927936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.208738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.273 [2024-07-15 13:52:46.208785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4251398048237750948 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.208801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.273 [2024-07-15 13:52:46.208853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6291456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.208869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.273 #58 NEW cov: 12213 ft: 15504 corp: 34/1277b lim: 105 exec/s: 58 rss: 74Mb L: 77/89 MS: 1 CopyPart- 00:07:08.273 [2024-07-15 13:52:46.258605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4294967296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.258632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.273 #59 NEW cov: 12213 ft: 15559 corp: 35/1307b lim: 105 exec/s: 59 rss: 74Mb L: 30/89 MS: 1 InsertByte- 00:07:08.273 [2024-07-15 13:52:46.298725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12666760499036160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.273 [2024-07-15 13:52:46.298753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.273 #60 NEW cov: 12213 ft: 15593 corp: 36/1333b lim: 105 exec/s: 60 rss: 74Mb L: 26/89 MS: 1 ChangeByte- 00:07:08.532 [2024-07-15 13:52:46.349278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5497560956928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.532 [2024-07-15 13:52:46.349304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.532 [2024-07-15 13:52:46.349373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:0 lba:11212726789901884315 len:39836 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.532 [2024-07-15 13:52:46.349389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.532 [2024-07-15 13:52:46.349441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11212726789901884315 len:39836 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.532 [2024-07-15 13:52:46.349456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.532 [2024-07-15 13:52:46.349508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11212726789901884315 len:39836 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.532 [2024-07-15 13:52:46.349523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.532 #65 NEW cov: 12213 ft: 15604 corp: 37/1431b lim: 105 exec/s: 65 rss: 74Mb L: 98/98 MS: 5 EraseBytes-ShuffleBytes-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:07:08.532 [2024-07-15 13:52:46.399005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5497558138880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.532 [2024-07-15 13:52:46.399033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.532 #71 NEW cov: 12213 ft: 15620 corp: 38/1458b lim: 105 exec/s: 71 rss: 74Mb L: 27/98 MS: 1 ChangeBit- 00:07:08.532 [2024-07-15 13:52:46.439104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9895604667649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.532 [2024-07-15 13:52:46.439132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.532 #72 NEW cov: 12213 ft: 15637 corp: 39/1485b lim: 105 exec/s: 36 rss: 74Mb L: 27/98 MS: 1 ChangeByte- 00:07:08.532 #72 DONE cov: 12213 ft: 15637 corp: 39/1485b lim: 105 exec/s: 36 rss: 74Mb 00:07:08.532 Done 72 runs in 2 second(s) 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf 
%02d 17 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.791 13:52:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:08.791 [2024-07-15 13:52:46.652551] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:08.791 [2024-07-15 13:52:46.652629] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847782 ] 00:07:08.791 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.050 [2024-07-15 13:52:46.867009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.051 [2024-07-15 13:52:46.938579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.051 [2024-07-15 13:52:46.998464] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.051 [2024-07-15 13:52:47.014748] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:09.051 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.051 INFO: Seed: 3147165370 00:07:09.051 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:09.051 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:09.051 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.051 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.051 #2 INITED exec/s: 0 rss: 65Mb 00:07:09.051 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:09.051 This may also happen if the target rejected all inputs we tried so far 00:07:09.051 [2024-07-15 13:52:47.073718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.051 [2024-07-15 13:52:47.073752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.051 [2024-07-15 13:52:47.073813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.051 [2024-07-15 13:52:47.073829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.619 NEW_FUNC[1/697]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:09.619 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.619 #7 NEW cov: 11990 ft: 11991 corp: 2/62b lim: 120 exec/s: 0 rss: 72Mb L: 61/61 MS: 5 CopyPart-CrossOver-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:09.619 [2024-07-15 13:52:47.446743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.446801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.446902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.446926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.447023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.447044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.447140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.447164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.619 #10 NEW cov: 12120 ft: 13012 corp: 3/166b lim: 120 exec/s: 0 rss: 72Mb L: 104/104 MS: 3 CMP-InsertByte-InsertRepeatedBytes- DE: "\377\022\021\262j$+v"- 00:07:09.619 [2024-07-15 13:52:47.505670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8174439528974610801 len:29042 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.505708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.619 #12 NEW cov: 12126 ft: 14024 corp: 4/209b lim: 120 exec/s: 0 rss: 72Mb L: 43/104 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:09.619 [2024-07-15 13:52:47.556908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.556944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.556999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.557019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.557069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.557084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.557180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.557199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.619 #13 NEW cov: 12211 ft: 14319 corp: 5/318b lim: 120 exec/s: 0 rss: 72Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:07:09.619 [2024-07-15 13:52:47.616974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.617004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.617066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.617083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.617143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071260078079 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.617158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.617243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.617273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.619 #14 NEW cov: 12211 ft: 14455 corp: 6/427b lim: 120 exec/s: 0 rss: 72Mb L: 109/109 MS: 1 ChangeBinInt- 00:07:09.619 [2024-07-15 13:52:47.677241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.677268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.677340] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.677359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.677419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.677435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.619 [2024-07-15 13:52:47.677534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446482467251552255 len:9260 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.619 [2024-07-15 13:52:47.677554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.879 #15 NEW cov: 12211 ft: 14518 corp: 7/544b lim: 120 exec/s: 0 rss: 72Mb L: 117/117 MS: 1 PersAutoDict- DE: "\377\022\021\262j$+v"- 00:07:09.879 [2024-07-15 13:52:47.726824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.726852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.879 [2024-07-15 13:52:47.726930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:266081813921792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.726949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.879 #16 NEW cov: 12211 ft: 14672 corp: 8/605b lim: 120 exec/s: 0 rss: 72Mb L: 61/117 MS: 1 ChangeByte- 00:07:09.879 [2024-07-15 13:52:47.786691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.786719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.879 #18 NEW cov: 12211 ft: 14694 corp: 9/630b lim: 120 exec/s: 0 rss: 72Mb L: 25/117 MS: 2 CrossOver-PersAutoDict- DE: "\377\022\021\262j$+v"- 00:07:09.879 [2024-07-15 13:52:47.837134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.837163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.879 [2024-07-15 13:52:47.837264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:266081813921792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.837281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.879 #19 NEW cov: 12211 ft: 14732 corp: 10/691b lim: 120 exec/s: 0 rss: 72Mb L: 61/117 MS: 1 ShuffleBytes- 00:07:09.879 [2024-07-15 13:52:47.897921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:18446743936270598143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.897954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.879 [2024-07-15 13:52:47.898008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.898026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.879 [2024-07-15 13:52:47.898090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.898109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.879 [2024-07-15 13:52:47.898193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446482467251552255 len:9260 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.879 [2024-07-15 13:52:47.898212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.879 #20 NEW cov: 12211 ft: 14810 corp: 11/808b lim: 120 exec/s: 0 rss: 72Mb L: 117/117 MS: 1 ChangeBit- 00:07:10.138 [2024-07-15 13:52:47.967589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:723399490534375423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.138 [2024-07-15 13:52:47.967621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.138 [2024-07-15 13:52:47.967713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:61953 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.138 [2024-07-15 13:52:47.967729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.138 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:10.138 #21 NEW cov: 12234 ft: 14856 corp: 12/877b lim: 120 exec/s: 0 rss: 72Mb L: 69/117 MS: 1 CrossOver- 00:07:10.138 [2024-07-15 13:52:48.017457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8174320781718810993 len:29042 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.138 [2024-07-15 13:52:48.017489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.138 #22 NEW cov: 12234 ft: 14922 corp: 13/921b lim: 120 exec/s: 22 rss: 73Mb L: 44/117 MS: 1 InsertByte- 00:07:10.138 [2024-07-15 13:52:48.088613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744071377518591 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.088647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.088724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.088744] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.088786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.088804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.088903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.088921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.139 #23 NEW cov: 12234 ft: 14954 corp: 14/1026b lim: 120 exec/s: 23 rss: 73Mb L: 105/117 MS: 1 InsertByte- 00:07:10.139 [2024-07-15 13:52:48.138760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743936270598143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.138791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.138866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.138887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.138941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18402833977342689279 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.138956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.139046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446743055802302463 len:27173 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.139066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.139 #24 NEW cov: 12234 ft: 14979 corp: 15/1144b lim: 120 exec/s: 24 rss: 73Mb L: 118/118 MS: 1 InsertByte- 00:07:10.139 [2024-07-15 13:52:48.208422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:723399490534375423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.208454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.139 [2024-07-15 13:52:48.208553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9655717601082343424 len:243 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.139 [2024-07-15 13:52:48.208575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.397 #25 NEW cov: 12234 ft: 15002 corp: 16/1214b lim: 120 exec/s: 25 rss: 73Mb L: 70/118 MS: 1 InsertByte- 00:07:10.397 [2024-07-15 13:52:48.268894] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.268923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.268985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.269002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.269077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.269095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.397 #26 NEW cov: 12234 ft: 15324 corp: 17/1309b lim: 120 exec/s: 26 rss: 73Mb L: 95/118 MS: 1 InsertRepeatedBytes- 00:07:10.397 [2024-07-15 13:52:48.318357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8970181430136124028 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.318389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.397 #29 NEW cov: 12234 ft: 15393 corp: 18/1351b lim: 120 exec/s: 29 rss: 73Mb L: 42/118 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:10.397 [2024-07-15 13:52:48.379562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446743936270598143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.379589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.379659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:10752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.379679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.379738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.379754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.379840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446743055802302463 len:27173 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.379861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.397 #30 NEW cov: 12234 ft: 15405 corp: 19/1469b lim: 120 exec/s: 30 rss: 73Mb L: 118/118 MS: 1 InsertByte- 00:07:10.397 [2024-07-15 13:52:48.429693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.429721] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.429780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.429801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.429867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.429885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.397 [2024-07-15 13:52:48.429975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:45675 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.397 [2024-07-15 13:52:48.429994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.397 #31 NEW cov: 12234 ft: 15423 corp: 20/1588b lim: 120 exec/s: 31 rss: 73Mb L: 119/119 MS: 1 CopyPart- 00:07:10.656 [2024-07-15 13:52:48.479322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:723399490534375423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.479351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.479445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9655717601082343424 len:243 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.479461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.657 #32 NEW cov: 12234 ft: 15451 corp: 21/1658b lim: 120 exec/s: 32 rss: 73Mb L: 70/119 MS: 1 ChangeByte- 00:07:10.657 [2024-07-15 13:52:48.540026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.540054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.540122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.540140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.540227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.540248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.540346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:45675 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 
13:52:48.540362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.657 #33 NEW cov: 12234 ft: 15468 corp: 22/1777b lim: 120 exec/s: 33 rss: 73Mb L: 119/119 MS: 1 ShuffleBytes- 00:07:10.657 [2024-07-15 13:52:48.610048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.610076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.610143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.610165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.610221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071260078079 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.610238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.657 #34 NEW cov: 12234 ft: 15490 corp: 23/1864b lim: 120 exec/s: 34 rss: 73Mb L: 87/119 MS: 1 EraseBytes- 00:07:10.657 [2024-07-15 13:52:48.669854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.669884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.669973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:266081813921792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.669987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.657 #35 NEW cov: 12234 ft: 15556 corp: 24/1925b lim: 120 exec/s: 35 rss: 73Mb L: 61/119 MS: 1 ChangeByte- 00:07:10.657 [2024-07-15 13:52:48.720005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:60 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.720035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.657 [2024-07-15 13:52:48.720118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.657 [2024-07-15 13:52:48.720133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.916 #36 NEW cov: 12234 ft: 15569 corp: 25/1986b lim: 120 exec/s: 36 rss: 73Mb L: 61/119 MS: 1 ChangeByte- 00:07:10.916 [2024-07-15 13:52:48.770905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.770934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.916 
[2024-07-15 13:52:48.771018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.771035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.916 [2024-07-15 13:52:48.771111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.771128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.916 [2024-07-15 13:52:48.771214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:17437937757178560512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.771236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.916 #37 NEW cov: 12234 ft: 15596 corp: 26/2093b lim: 120 exec/s: 37 rss: 73Mb L: 107/119 MS: 1 CopyPart- 00:07:10.916 [2024-07-15 13:52:48.831442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.831470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.916 [2024-07-15 13:52:48.831567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.831587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.916 [2024-07-15 13:52:48.831667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.831688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.916 [2024-07-15 13:52:48.831777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:4531 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.831794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.916 [2024-07-15 13:52:48.831879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:4294967040 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.831897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:10.916 #43 NEW cov: 12234 ft: 15648 corp: 27/2213b lim: 120 exec/s: 43 rss: 73Mb L: 120/120 MS: 1 InsertByte- 00:07:10.916 [2024-07-15 13:52:48.880295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8970181430136124028 len:31869 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.916 [2024-07-15 13:52:48.880324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.917 #44 NEW cov: 12234 
ft: 15690 corp: 28/2255b lim: 120 exec/s: 44 rss: 74Mb L: 42/120 MS: 1 ChangeBit- 00:07:10.917 [2024-07-15 13:52:48.941468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.917 [2024-07-15 13:52:48.941497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.917 [2024-07-15 13:52:48.941570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.917 [2024-07-15 13:52:48.941590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.917 [2024-07-15 13:52:48.941659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.917 [2024-07-15 13:52:48.941676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.917 [2024-07-15 13:52:48.941763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.917 [2024-07-15 13:52:48.941779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.917 #45 NEW cov: 12234 ft: 15715 corp: 29/2358b lim: 120 exec/s: 45 rss: 74Mb L: 103/120 MS: 1 EraseBytes- 00:07:11.175 [2024-07-15 13:52:49.001028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.175 [2024-07-15 13:52:49.001058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.175 [2024-07-15 13:52:49.001132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:266081813921792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.176 [2024-07-15 13:52:49.001151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.176 #46 NEW cov: 12234 ft: 15728 corp: 30/2420b lim: 120 exec/s: 46 rss: 74Mb L: 62/120 MS: 1 InsertByte- 00:07:11.176 [2024-07-15 13:52:49.051237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168429568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.176 [2024-07-15 13:52:49.051268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.176 [2024-07-15 13:52:49.051340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:266081813921792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.176 [2024-07-15 13:52:49.051356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.176 #47 NEW cov: 12234 ft: 15742 corp: 31/2481b lim: 120 exec/s: 23 rss: 74Mb L: 61/120 MS: 1 CMP- DE: "\001\023\021\263F\177&\014"- 00:07:11.176 #47 DONE cov: 12234 ft: 15742 corp: 31/2481b lim: 120 exec/s: 23 rss: 74Mb 00:07:11.176 ###### Recommended dictionary. 
######
00:07:11.176 "\377\022\021\262j$+v" # Uses: 2
00:07:11.176 "\001\023\021\263F\177&\014" # Uses: 0
00:07:11.176 ###### End of recommended dictionary. ######
00:07:11.176 Done 47 runs in 2 second(s)
00:07:11.176 13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
13:52:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
[2024-07-15 13:52:49.257463] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
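[Editor's note] The xtrace above is the complete per-run setup nvmf/run.sh performs before launching fuzzer 18. A condensed sketch of that sequence follows; variable names, paths, and commands are taken from the trace, while the function wrapper, the $rootdir shorthand, and the output redirections of the sed/echo steps are inferred (the trace shows the commands but not their redirect targets), so treat this as a reconstruction rather than the verbatim script.

    # Sketch of start_llvm_fuzz as reconstructed from the trace above.
    # $rootdir stands in for the spdk checkout directory.
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz
        local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

        # The port is "44" plus the zero-padded fuzzer number (18 -> 4418),
        # so every fuzzer gets its own NVMe/TCP listener.
        local port="44$(printf %02d $fuzzer_type)"
        mkdir -p $corpus_dir
        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

        # Rewrite the shared target config template to listen on this run's
        # port (redirection into $nvmf_cfg is assumed, not shown in the trace).
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg

        # Suppress two known shutdown-path leaks for LeakSanitizer.
        echo "leak:spdk_nvmf_qpair_disconnect" > $suppress_file
        echo "leak:nvmf_ctrlr_create" >> $suppress_file

        $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
            -P $rootdir/../output/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
            -D $corpus_dir -Z $fuzzer_type
    }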
00:07:11.434 [2024-07-15 13:52:49.257543] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848086 ] 00:07:11.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.434 [2024-07-15 13:52:49.471871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.692 [2024-07-15 13:52:49.543175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.692 [2024-07-15 13:52:49.602920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.692 [2024-07-15 13:52:49.619207] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:11.692 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.692 INFO: Seed: 1458193453 00:07:11.692 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:11.692 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:11.692 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:11.692 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.692 #2 INITED exec/s: 0 rss: 65Mb 00:07:11.692 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.692 This may also happen if the target rejected all inputs we tried so far 00:07:11.692 [2024-07-15 13:52:49.673991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:11.692 [2024-07-15 13:52:49.674025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.692 [2024-07-15 13:52:49.674059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:11.692 [2024-07-15 13:52:49.674076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.692 [2024-07-15 13:52:49.674105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:11.692 [2024-07-15 13:52:49.674120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.692 [2024-07-15 13:52:49.674148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:11.692 [2024-07-15 13:52:49.674162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.259 NEW_FUNC[1/695]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:12.259 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.259 #26 NEW cov: 11933 ft: 11934 corp: 2/89b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 4 CMP-InsertByte-InsertByte-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\377\377"- 00:07:12.259 [2024-07-15 13:52:50.045049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.259 [2024-07-15 13:52:50.045108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.045149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.259 [2024-07-15 13:52:50.045168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.045201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.259 [2024-07-15 13:52:50.045226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.045259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.259 [2024-07-15 13:52:50.045276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.259 #27 NEW cov: 12063 ft: 12481 corp: 3/177b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 ChangeByte- 00:07:12.259 [2024-07-15 13:52:50.124951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.259 [2024-07-15 13:52:50.124988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.125036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.259 [2024-07-15 13:52:50.125052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.125085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.259 [2024-07-15 13:52:50.125099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.125127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.259 [2024-07-15 13:52:50.125141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.259 #28 NEW cov: 12069 ft: 12693 corp: 4/265b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:12.259 [2024-07-15 13:52:50.205154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.259 [2024-07-15 13:52:50.205187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.205240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.259 [2024-07-15 13:52:50.205256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.205285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.259 [2024-07-15 13:52:50.205300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.205327] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.259 [2024-07-15 13:52:50.205342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.259 #34 NEW cov: 12154 ft: 13030 corp: 5/353b lim: 100 exec/s: 0 rss: 73Mb L: 88/88 MS: 1 ChangeBit- 00:07:12.259 [2024-07-15 13:52:50.255274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.259 [2024-07-15 13:52:50.255303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.255351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.259 [2024-07-15 13:52:50.255367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.255396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.259 [2024-07-15 13:52:50.255411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.255439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.259 [2024-07-15 13:52:50.255453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.259 #35 NEW cov: 12154 ft: 13127 corp: 6/442b lim: 100 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 InsertByte- 00:07:12.259 [2024-07-15 13:52:50.305388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.259 [2024-07-15 13:52:50.305416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.305463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.259 [2024-07-15 13:52:50.305479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.305508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.259 [2024-07-15 13:52:50.305523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.259 [2024-07-15 13:52:50.305555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.259 [2024-07-15 13:52:50.305570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.518 #36 NEW cov: 12154 ft: 13253 corp: 7/531b lim: 100 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 ChangeBinInt- 00:07:12.518 [2024-07-15 13:52:50.385594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.518 [2024-07-15 13:52:50.385622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.385668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.518 [2024-07-15 
13:52:50.385684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.385714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.518 [2024-07-15 13:52:50.385729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.385756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.518 [2024-07-15 13:52:50.385770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.518 #37 NEW cov: 12154 ft: 13366 corp: 8/619b lim: 100 exec/s: 0 rss: 73Mb L: 88/89 MS: 1 ShuffleBytes- 00:07:12.518 [2024-07-15 13:52:50.435573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.518 [2024-07-15 13:52:50.435602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.518 #39 NEW cov: 12154 ft: 13815 corp: 9/648b lim: 100 exec/s: 0 rss: 73Mb L: 29/89 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:12.518 [2024-07-15 13:52:50.495855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.518 [2024-07-15 13:52:50.495883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.495930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.518 [2024-07-15 13:52:50.495946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.495975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.518 [2024-07-15 13:52:50.495990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.496017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.518 [2024-07-15 13:52:50.496031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.518 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:12.518 #40 NEW cov: 12171 ft: 13839 corp: 10/737b lim: 100 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 InsertByte- 00:07:12.518 [2024-07-15 13:52:50.576047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.518 [2024-07-15 13:52:50.576078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.576126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.518 [2024-07-15 13:52:50.576143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.576171] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.518 [2024-07-15 13:52:50.576190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.518 [2024-07-15 13:52:50.576226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.518 [2024-07-15 13:52:50.576241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.777 #41 NEW cov: 12171 ft: 13868 corp: 11/826b lim: 100 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 ShuffleBytes- 00:07:12.777 [2024-07-15 13:52:50.626212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.777 [2024-07-15 13:52:50.626249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.626295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.777 [2024-07-15 13:52:50.626311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.626341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.777 [2024-07-15 13:52:50.626355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.626383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.777 [2024-07-15 13:52:50.626397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.777 #42 NEW cov: 12171 ft: 13911 corp: 12/917b lim: 100 exec/s: 42 rss: 73Mb L: 91/91 MS: 1 CMP- DE: "\001\001"- 00:07:12.777 [2024-07-15 13:52:50.706403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.777 [2024-07-15 13:52:50.706432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.706479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.777 [2024-07-15 13:52:50.706494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.706523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.777 [2024-07-15 13:52:50.706538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.706565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.777 [2024-07-15 13:52:50.706580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.777 #43 NEW cov: 12171 ft: 13929 corp: 13/1006b lim: 100 exec/s: 43 rss: 73Mb L: 89/91 MS: 1 ChangeBit- 00:07:12.777 [2024-07-15 13:52:50.786630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.777 
[2024-07-15 13:52:50.786659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.786705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.777 [2024-07-15 13:52:50.786721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.786750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.777 [2024-07-15 13:52:50.786764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.786792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.777 [2024-07-15 13:52:50.786810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.777 #44 NEW cov: 12171 ft: 13953 corp: 14/1094b lim: 100 exec/s: 44 rss: 73Mb L: 88/91 MS: 1 ChangeByte- 00:07:12.777 [2024-07-15 13:52:50.836769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.777 [2024-07-15 13:52:50.836798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.836845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.777 [2024-07-15 13:52:50.836860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.836889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.777 [2024-07-15 13:52:50.836904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.777 [2024-07-15 13:52:50.836932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.777 [2024-07-15 13:52:50.836946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.036 #45 NEW cov: 12171 ft: 13981 corp: 15/1187b lim: 100 exec/s: 45 rss: 73Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:07:13.036 [2024-07-15 13:52:50.896886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.036 [2024-07-15 13:52:50.896915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:50.896961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.036 [2024-07-15 13:52:50.896977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:50.897006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.036 [2024-07-15 13:52:50.897021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.036 
[2024-07-15 13:52:50.897048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.036 [2024-07-15 13:52:50.897062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.036 #46 NEW cov: 12171 ft: 14024 corp: 16/1278b lim: 100 exec/s: 46 rss: 73Mb L: 91/93 MS: 1 CopyPart- 00:07:13.036 [2024-07-15 13:52:50.977077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.036 [2024-07-15 13:52:50.977107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:50.977155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.036 [2024-07-15 13:52:50.977171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:50.977201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.036 [2024-07-15 13:52:50.977216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.036 #47 NEW cov: 12171 ft: 14298 corp: 17/1345b lim: 100 exec/s: 47 rss: 73Mb L: 67/93 MS: 1 EraseBytes- 00:07:13.036 [2024-07-15 13:52:51.027229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.036 [2024-07-15 13:52:51.027257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:51.027310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.036 [2024-07-15 13:52:51.027326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:51.027354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.036 [2024-07-15 13:52:51.027368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:51.027396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.036 [2024-07-15 13:52:51.027410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.036 #48 NEW cov: 12171 ft: 14376 corp: 18/1439b lim: 100 exec/s: 48 rss: 73Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:13.036 [2024-07-15 13:52:51.077303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.036 [2024-07-15 13:52:51.077333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.036 [2024-07-15 13:52:51.077381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.036 [2024-07-15 13:52:51.077397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.295 #49 NEW cov: 12171 ft: 14625 corp: 19/1490b lim: 100 exec/s: 49 
rss: 74Mb L: 51/94 MS: 1 EraseBytes- 00:07:13.295 [2024-07-15 13:52:51.157596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.295 [2024-07-15 13:52:51.157624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.157656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.295 [2024-07-15 13:52:51.157672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.157701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.295 [2024-07-15 13:52:51.157716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.157743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.295 [2024-07-15 13:52:51.157757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.295 #50 NEW cov: 12171 ft: 14671 corp: 20/1584b lim: 100 exec/s: 50 rss: 74Mb L: 94/94 MS: 1 ChangeBinInt- 00:07:13.295 [2024-07-15 13:52:51.237759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.295 [2024-07-15 13:52:51.237787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.237833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.295 [2024-07-15 13:52:51.237849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.237878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.295 [2024-07-15 13:52:51.237893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.237921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.295 [2024-07-15 13:52:51.237935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.295 #51 NEW cov: 12171 ft: 14692 corp: 21/1672b lim: 100 exec/s: 51 rss: 74Mb L: 88/94 MS: 1 CopyPart- 00:07:13.295 [2024-07-15 13:52:51.298770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.295 [2024-07-15 13:52:51.298804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.298866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.295 [2024-07-15 13:52:51.298886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.298945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.295 
[2024-07-15 13:52:51.298963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.299023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.295 [2024-07-15 13:52:51.299040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.295 #52 NEW cov: 12171 ft: 14756 corp: 22/1763b lim: 100 exec/s: 52 rss: 74Mb L: 91/94 MS: 1 ChangeByte- 00:07:13.295 [2024-07-15 13:52:51.339013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.295 [2024-07-15 13:52:51.339085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.339197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.295 [2024-07-15 13:52:51.339250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.295 [2024-07-15 13:52:51.339358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.295 [2024-07-15 13:52:51.339399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.553 #53 NEW cov: 12171 ft: 14915 corp: 23/1830b lim: 100 exec/s: 53 rss: 74Mb L: 67/94 MS: 1 CopyPart- 00:07:13.553 [2024-07-15 13:52:51.409069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.553 [2024-07-15 13:52:51.409096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.409143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.553 [2024-07-15 13:52:51.409157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.409230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.553 [2024-07-15 13:52:51.409245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.409300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.553 [2024-07-15 13:52:51.409315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.553 #54 NEW cov: 12171 ft: 14987 corp: 24/1928b lim: 100 exec/s: 54 rss: 74Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:13.553 [2024-07-15 13:52:51.459196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.553 [2024-07-15 13:52:51.459225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.459280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.553 [2024-07-15 13:52:51.459294] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.459351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.553 [2024-07-15 13:52:51.459365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.459419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.553 [2024-07-15 13:52:51.459433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.553 #55 NEW cov: 12171 ft: 15022 corp: 25/2019b lim: 100 exec/s: 55 rss: 74Mb L: 91/98 MS: 1 ChangeBinInt- 00:07:13.553 [2024-07-15 13:52:51.509005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.553 [2024-07-15 13:52:51.509030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.553 #56 NEW cov: 12171 ft: 15078 corp: 26/2048b lim: 100 exec/s: 56 rss: 74Mb L: 29/98 MS: 1 ChangeBit- 00:07:13.553 [2024-07-15 13:52:51.559398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.553 [2024-07-15 13:52:51.559423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.559459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.553 [2024-07-15 13:52:51.559473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.559527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.553 [2024-07-15 13:52:51.559541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.553 #57 NEW cov: 12178 ft: 15159 corp: 27/2127b lim: 100 exec/s: 57 rss: 74Mb L: 79/98 MS: 1 InsertRepeatedBytes- 00:07:13.553 [2024-07-15 13:52:51.609521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.553 [2024-07-15 13:52:51.609546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.609582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.553 [2024-07-15 13:52:51.609596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.553 [2024-07-15 13:52:51.609651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.553 [2024-07-15 13:52:51.609664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.812 #58 NEW cov: 12178 ft: 15172 corp: 28/2206b lim: 100 exec/s: 58 rss: 74Mb L: 79/98 MS: 1 ChangeBit- 00:07:13.812 [2024-07-15 13:52:51.659417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 
00:07:13.812 [2024-07-15 13:52:51.659443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:13.812 #59 NEW cov: 12178 ft: 15197 corp: 29/2244b lim: 100 exec/s: 29 rss: 74Mb L: 38/98 MS: 1 CopyPart-
00:07:13.812 #59 DONE cov: 12178 ft: 15197 corp: 29/2244b lim: 100 exec/s: 29 rss: 74Mb
00:07:13.812 ###### Recommended dictionary. ######
00:07:13.812 "\377\377\377\377\377\377\377\377" # Uses: 1
00:07:13.812 "\001\001" # Uses: 0
00:07:13.812 ###### End of recommended dictionary. ######
00:07:13.812 Done 59 runs in 2 second(s)
00:07:13.812 13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
13:52:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
[2024-07-15 13:52:51.875028] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
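[Editor's note] The fuzzer-19 invocation above has the same shape as the previous one; here it is restated with per-flag comments. The flag meanings are inferred from surrounding log lines (the single-core reactor message, the "-m 512" in the EAL parameters, timen=1, the corpus directory), not from the tool's help text, so treat them as annotation rather than authority. $spdk stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk.

    # Annotated restatement of the traced llvm_nvme_fuzz invocation (sketch).
    args=(
        -m 0x1                             # core mask: one reactor ("Reactor started on core 0")
        -s 512                             # hugepage memory in MB (appears as "-m 512" in the EAL parameters)
        -P "$spdk/../output/llvm/"         # where crash/timeout artifacts land (inferred)
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419"
        -c /tmp/fuzz_json_19.conf          # target JSON config produced by the sed step above
        -t 1                               # time budget in seconds (timen=1)
        -D "$spdk/../corpus/llvm_nvmf_19"  # persistent per-fuzzer corpus directory
        -Z 19                              # fuzzer number: 19 drives WRITE UNCORRECTABLE, per the NEW_FUNC line below
    )
    "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" "${args[@]}"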
00:07:13.812 [2024-07-15 13:52:51.875096] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848440 ] 00:07:14.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.070 [2024-07-15 13:52:52.092284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.329 [2024-07-15 13:52:52.163128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.329 [2024-07-15 13:52:52.222615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.329 [2024-07-15 13:52:52.238915] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:14.329 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.329 INFO: Seed: 4078203128 00:07:14.329 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:14.329 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:14.329 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.329 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.329 #2 INITED exec/s: 0 rss: 64Mb 00:07:14.329 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.329 This may also happen if the target rejected all inputs we tried so far 00:07:14.329 [2024-07-15 13:52:52.283494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:44006637568 len:11 00:07:14.329 [2024-07-15 13:52:52.283528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.588 NEW_FUNC[1/692]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:14.588 NEW_FUNC[2/692]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.588 #4 NEW cov: 11870 ft: 11912 corp: 2/11b lim: 50 exec/s: 0 rss: 72Mb L: 10/10 MS: 2 CMP-CrossOver- DE: "?\000\000\000\000\000\000\000"- 00:07:14.588 [2024-07-15 13:52:52.656557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460047304291444 len:29813 00:07:14.588 [2024-07-15 13:52:52.656611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.588 [2024-07-15 13:52:52.656699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8391460049216894068 len:29813 00:07:14.588 [2024-07-15 13:52:52.656719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.588 [2024-07-15 13:52:52.656806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8391460049216894068 len:29813 00:07:14.588 [2024-07-15 13:52:52.656825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.846 NEW_FUNC[1/3]: 0x1a781f0 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:546 00:07:14.846 
NEW_FUNC[2/3]: 0x1a79780 in reactor_post_process_lw_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:868 00:07:14.846 #7 NEW cov: 12041 ft: 12921 corp: 3/45b lim: 50 exec/s: 0 rss: 72Mb L: 34/34 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:07:14.846 [2024-07-15 13:52:52.706122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:44006637568 len:52993 00:07:14.846 [2024-07-15 13:52:52.706150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.846 #8 NEW cov: 12047 ft: 13223 corp: 4/56b lim: 50 exec/s: 0 rss: 72Mb L: 11/34 MS: 1 InsertByte- 00:07:14.846 [2024-07-15 13:52:52.766373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:44006637568 len:52993 00:07:14.846 [2024-07-15 13:52:52.766401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.846 #9 NEW cov: 12132 ft: 13516 corp: 5/67b lim: 50 exec/s: 0 rss: 72Mb L: 11/34 MS: 1 ShuffleBytes- 00:07:14.846 [2024-07-15 13:52:52.826522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171900971 len:1 00:07:14.846 [2024-07-15 13:52:52.826550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.846 #12 NEW cov: 12132 ft: 13577 corp: 6/77b lim: 50 exec/s: 0 rss: 72Mb L: 10/34 MS: 3 PersAutoDict-PersAutoDict-InsertByte- DE: "?\000\000\000\000\000\000\000"-"?\000\000\000\000\000\000\000"- 00:07:14.847 [2024-07-15 13:52:52.876658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460047438509172 len:29813 00:07:14.847 [2024-07-15 13:52:52.876687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.847 #13 NEW cov: 12132 ft: 13633 corp: 7/92b lim: 50 exec/s: 0 rss: 72Mb L: 15/34 MS: 1 CrossOver- 00:07:15.106 [2024-07-15 13:52:52.926829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460048320069642 len:29813 00:07:15.106 [2024-07-15 13:52:52.926858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.106 #19 NEW cov: 12132 ft: 13689 corp: 8/104b lim: 50 exec/s: 0 rss: 72Mb L: 12/34 MS: 1 CrossOver- 00:07:15.106 [2024-07-15 13:52:52.977682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460048320069642 len:65536 00:07:15.106 [2024-07-15 13:52:52.977710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.106 [2024-07-15 13:52:52.977779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:15.106 [2024-07-15 13:52:52.977800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.106 [2024-07-15 13:52:52.977859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:15.106 [2024-07-15 13:52:52.977874] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.106 [2024-07-15 13:52:52.977952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65397 00:07:15.106 [2024-07-15 13:52:52.977970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.106 #20 NEW cov: 12132 ft: 13976 corp: 9/147b lim: 50 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:07:15.106 [2024-07-15 13:52:53.037206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171917056 len:1 00:07:15.106 [2024-07-15 13:52:53.037243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.106 #21 NEW cov: 12132 ft: 14047 corp: 10/165b lim: 50 exec/s: 0 rss: 72Mb L: 18/43 MS: 1 PersAutoDict- DE: "?\000\000\000\000\000\000\000"- 00:07:15.106 [2024-07-15 13:52:53.097573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171917056 len:1 00:07:15.106 [2024-07-15 13:52:53.097601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.106 [2024-07-15 13:52:53.097665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2818048 len:11009 00:07:15.106 [2024-07-15 13:52:53.097681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.106 #22 NEW cov: 12132 ft: 14280 corp: 11/194b lim: 50 exec/s: 0 rss: 72Mb L: 29/43 MS: 1 CopyPart- 00:07:15.106 [2024-07-15 13:52:53.157578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1056964608 len:1 00:07:15.106 [2024-07-15 13:52:53.157606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.364 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:15.364 #23 NEW cov: 12155 ft: 14321 corp: 12/210b lim: 50 exec/s: 0 rss: 72Mb L: 16/43 MS: 1 CrossOver- 00:07:15.364 [2024-07-15 13:52:53.207794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171900971 len:1 00:07:15.364 [2024-07-15 13:52:53.207821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.364 #24 NEW cov: 12155 ft: 14424 corp: 13/220b lim: 50 exec/s: 0 rss: 72Mb L: 10/43 MS: 1 ShuffleBytes- 00:07:15.364 [2024-07-15 13:52:53.257937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:35184543989803 len:1 00:07:15.364 [2024-07-15 13:52:53.257964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.364 #25 NEW cov: 12155 ft: 14463 corp: 14/230b lim: 50 exec/s: 25 rss: 72Mb L: 10/43 MS: 1 ChangeBit- 00:07:15.364 [2024-07-15 13:52:53.318373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1085102886323425039 len:21332 00:07:15.364 [2024-07-15 
13:52:53.318401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.364 [2024-07-15 13:52:53.318461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6004234345560363859 len:21332 00:07:15.364 [2024-07-15 13:52:53.318477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.364 #29 NEW cov: 12155 ft: 14491 corp: 15/257b lim: 50 exec/s: 29 rss: 72Mb L: 27/43 MS: 4 CrossOver-ShuffleBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:15.364 [2024-07-15 13:52:53.369210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5148868316613312522 len:29952 00:07:15.364 [2024-07-15 13:52:53.369245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.364 [2024-07-15 13:52:53.369314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:15.364 [2024-07-15 13:52:53.369334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.364 [2024-07-15 13:52:53.369395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:15.364 [2024-07-15 13:52:53.369412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.364 [2024-07-15 13:52:53.369502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:15.364 [2024-07-15 13:52:53.369524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.364 #30 NEW cov: 12155 ft: 14507 corp: 16/301b lim: 50 exec/s: 30 rss: 73Mb L: 44/44 MS: 1 InsertByte- 00:07:15.623 [2024-07-15 13:52:53.438615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171917056 len:1 00:07:15.623 [2024-07-15 13:52:53.438645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.623 #31 NEW cov: 12155 ft: 14542 corp: 17/319b lim: 50 exec/s: 31 rss: 73Mb L: 18/44 MS: 1 PersAutoDict- DE: "?\000\000\000\000\000\000\000"- 00:07:15.623 [2024-07-15 13:52:53.489325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460047438509172 len:29813 00:07:15.623 [2024-07-15 13:52:53.489357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.623 [2024-07-15 13:52:53.489413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:15.623 [2024-07-15 13:52:53.489433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.623 [2024-07-15 13:52:53.489504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:29813 00:07:15.623 [2024-07-15 13:52:53.489523] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.623 #37 NEW cov: 12155 ft: 14563 corp: 18/352b lim: 50 exec/s: 37 rss: 73Mb L: 33/44 MS: 1 InsertRepeatedBytes- 00:07:15.623 [2024-07-15 13:52:53.558972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:35184543997995 len:1 00:07:15.623 [2024-07-15 13:52:53.559002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.623 #39 NEW cov: 12155 ft: 14569 corp: 19/362b lim: 50 exec/s: 39 rss: 73Mb L: 10/44 MS: 2 EraseBytes-InsertByte- 00:07:15.623 [2024-07-15 13:52:53.619924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1056964608 len:1 00:07:15.623 [2024-07-15 13:52:53.619954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.623 [2024-07-15 13:52:53.620021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:15.623 [2024-07-15 13:52:53.620039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.623 [2024-07-15 13:52:53.620102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:15.623 [2024-07-15 13:52:53.620123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.623 [2024-07-15 13:52:53.620212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:15.623 [2024-07-15 13:52:53.620249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.623 #40 NEW cov: 12155 ft: 14635 corp: 20/407b lim: 50 exec/s: 40 rss: 73Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:07:15.623 [2024-07-15 13:52:53.669362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1056964608 len:29813 00:07:15.623 [2024-07-15 13:52:53.669394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.623 #41 NEW cov: 12155 ft: 14658 corp: 21/422b lim: 50 exec/s: 41 rss: 73Mb L: 15/45 MS: 1 PersAutoDict- DE: "?\000\000\000\000\000\000\000"- 00:07:15.883 [2024-07-15 13:52:53.720087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:170278719 len:1 00:07:15.883 [2024-07-15 13:52:53.720118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.883 [2024-07-15 13:52:53.720188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11008 len:44 00:07:15.883 [2024-07-15 13:52:53.720206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.883 [2024-07-15 13:52:53.720274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:15.883 [2024-07-15 13:52:53.720293] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.883 #42 NEW cov: 12155 ft: 14661 corp: 22/452b lim: 50 exec/s: 42 rss: 73Mb L: 30/45 MS: 1 InsertByte- 00:07:15.883 [2024-07-15 13:52:53.789876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171917056 len:1 00:07:15.883 [2024-07-15 13:52:53.789905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.883 #43 NEW cov: 12155 ft: 14707 corp: 23/470b lim: 50 exec/s: 43 rss: 73Mb L: 18/45 MS: 1 ChangeBinInt- 00:07:15.883 [2024-07-15 13:52:53.850041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171917056 len:7937 00:07:15.883 [2024-07-15 13:52:53.850070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.883 #44 NEW cov: 12155 ft: 14747 corp: 24/489b lim: 50 exec/s: 44 rss: 73Mb L: 19/45 MS: 1 InsertByte- 00:07:15.883 [2024-07-15 13:52:53.900149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460049216894068 len:29813 00:07:15.883 [2024-07-15 13:52:53.900176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.883 #46 NEW cov: 12155 ft: 14758 corp: 25/507b lim: 50 exec/s: 46 rss: 73Mb L: 18/45 MS: 2 EraseBytes-CopyPart- 00:07:15.883 [2024-07-15 13:52:53.950334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8391460049216894068 len:11637 00:07:15.884 [2024-07-15 13:52:53.950363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.209 #47 NEW cov: 12155 ft: 14800 corp: 26/526b lim: 50 exec/s: 47 rss: 73Mb L: 19/45 MS: 1 InsertByte- 00:07:16.209 [2024-07-15 13:52:54.010562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:44006637568 len:1 00:07:16.209 [2024-07-15 13:52:54.010590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.209 #48 NEW cov: 12155 ft: 14801 corp: 27/537b lim: 50 exec/s: 48 rss: 73Mb L: 11/45 MS: 1 CopyPart- 00:07:16.209 [2024-07-15 13:52:54.060722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2522015791499378731 len:1 00:07:16.209 [2024-07-15 13:52:54.060750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.209 #49 NEW cov: 12155 ft: 14812 corp: 28/547b lim: 50 exec/s: 49 rss: 73Mb L: 10/45 MS: 1 ChangeByte- 00:07:16.209 [2024-07-15 13:52:54.110937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:171900971 len:10753 00:07:16.209 [2024-07-15 13:52:54.110965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.209 #50 NEW cov: 12155 ft: 14818 corp: 29/557b lim: 50 exec/s: 50 rss: 73Mb L: 10/45 MS: 1 ChangeByte- 00:07:16.209 [2024-07-15 13:52:54.161506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:0 nsid:0 lba:8391460047438509172 len:29813 00:07:16.209 [2024-07-15 13:52:54.161534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.209 [2024-07-15 13:52:54.161618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:536870912 len:1 00:07:16.209 [2024-07-15 13:52:54.161638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.209 [2024-07-15 13:52:54.161721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:29813 00:07:16.209 [2024-07-15 13:52:54.161736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.209 #51 NEW cov: 12155 ft: 14820 corp: 30/590b lim: 50 exec/s: 51 rss: 73Mb L: 33/45 MS: 1 ChangeBit- 00:07:16.209 [2024-07-15 13:52:54.221958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8425934659001807988 len:61167 00:07:16.209 [2024-07-15 13:52:54.221985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.209 [2024-07-15 13:52:54.222056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 00:07:16.209 [2024-07-15 13:52:54.222074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.209 [2024-07-15 13:52:54.222152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 00:07:16.209 [2024-07-15 13:52:54.222168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.209 [2024-07-15 13:52:54.222265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8391460051271741166 len:29813 00:07:16.209 [2024-07-15 13:52:54.222282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.209 #52 NEW cov: 12155 ft: 14828 corp: 31/634b lim: 50 exec/s: 52 rss: 73Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:07:16.468 [2024-07-15 13:52:54.271376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:44006637568 len:52993 00:07:16.468 [2024-07-15 13:52:54.271405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.468 #53 NEW cov: 12155 ft: 14855 corp: 32/645b lim: 50 exec/s: 26 rss: 73Mb L: 11/45 MS: 1 ShuffleBytes- 00:07:16.468 #53 DONE cov: 12155 ft: 14855 corp: 32/645b lim: 50 exec/s: 26 rss: 73Mb 00:07:16.468 ###### Recommended dictionary. ###### 00:07:16.468 "?\000\000\000\000\000\000\000" # Uses: 5 00:07:16.468 ###### End of recommended dictionary. 
###### 00:07:16.468 Done 53 runs in 2 second(s) 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.468 13:52:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:16.468 [2024-07-15 13:52:54.463575] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
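The xtrace entries above show nvmf/run.sh tearing down fuzzer 19 and bringing up fuzzer 20: it derives the TCP port from the fuzzer number, rewrites trsvcid in the JSON config, writes the LSAN leak suppressions, and then invokes llvm_nvme_fuzz. A minimal bash sketch of that sequence, reconstructed from the commands visible in the trace (the function wrapper, the output redirections, and the "44 plus zero-padded fuzzer number" port rule are assumptions; the commands themselves are taken from the log):

#!/usr/bin/env bash
# Sketch of the start_llvm_fuzz step traced above; not the actual nvmf/run.sh source.
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

start_llvm_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3            # run.sh@23-25 in the trace
    local corpus_dir="$SPDK/../corpus/llvm_nvmf_${fuzzer_type}"
    local nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    local suppress_file=/var/tmp/suppress_nvmf_fuzz
    local LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"
    # The trace runs `printf %02d 20` and then sets port=4420, so the listener
    # port appears to be 44 followed by the zero-padded fuzzer number (assumption).
    local port="44$(printf %02d "$fuzzer_type")"

    mkdir -p "$corpus_dir"
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Rewrite the listener port in the fuzzer's JSON config (redirection assumed).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # LSAN suppressions echoed at run.sh@41-42 (redirection assumed).
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$SPDK/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
        -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

    rm -rf "$nvmf_cfg" "$suppress_file"               # run.sh@54, before the next fuzzer
}

start_llvm_fuzz 20 1 0x1                              # as in ../common.sh@73 above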
00:07:16.468 [2024-07-15 13:52:54.463660] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848814 ] 00:07:16.468 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.725 [2024-07-15 13:52:54.674583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.725 [2024-07-15 13:52:54.744621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.983 [2024-07-15 13:52:54.804923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.983 [2024-07-15 13:52:54.821190] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:16.983 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.983 INFO: Seed: 2363254873 00:07:16.983 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:16.983 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:16.983 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.983 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.983 #2 INITED exec/s: 0 rss: 64Mb 00:07:16.983 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.983 This may also happen if the target rejected all inputs we tried so far 00:07:16.983 [2024-07-15 13:52:54.891808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:16.983 [2024-07-15 13:52:54.891854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.983 [2024-07-15 13:52:54.891918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:16.983 [2024-07-15 13:52:54.891939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.983 [2024-07-15 13:52:54.892040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:16.983 [2024-07-15 13:52:54.892058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.242 NEW_FUNC[1/697]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:17.242 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.242 #10 NEW cov: 11969 ft: 11970 corp: 2/56b lim: 90 exec/s: 0 rss: 72Mb L: 55/55 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:17.242 [2024-07-15 13:52:55.242296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.242 [2024-07-15 13:52:55.242361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.242 [2024-07-15 13:52:55.242466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.242 [2024-07-15 13:52:55.242489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.242 #16 NEW cov: 12099 ft: 12957 corp: 3/98b lim: 90 exec/s: 0 rss: 72Mb L: 42/55 MS: 1 EraseBytes- 00:07:17.242 [2024-07-15 13:52:55.312723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.242 [2024-07-15 13:52:55.312754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.242 [2024-07-15 13:52:55.312826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.242 [2024-07-15 13:52:55.312843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.242 [2024-07-15 13:52:55.312936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.242 [2024-07-15 13:52:55.312953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.501 #17 NEW cov: 12105 ft: 13238 corp: 4/154b lim: 90 exec/s: 0 rss: 72Mb L: 56/56 MS: 1 CrossOver- 00:07:17.501 [2024-07-15 13:52:55.363072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.501 [2024-07-15 13:52:55.363105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.363168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.501 [2024-07-15 13:52:55.363184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.363256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.501 [2024-07-15 13:52:55.363276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.501 #18 NEW cov: 12190 ft: 13585 corp: 5/210b lim: 90 exec/s: 0 rss: 72Mb L: 56/56 MS: 1 ShuffleBytes- 00:07:17.501 [2024-07-15 13:52:55.422775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.501 [2024-07-15 13:52:55.422804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.422888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.501 [2024-07-15 13:52:55.422908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.501 #21 NEW cov: 12190 ft: 13621 corp: 6/247b lim: 90 exec/s: 0 rss: 72Mb L: 37/56 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:17.501 [2024-07-15 13:52:55.473198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.501 [2024-07-15 13:52:55.473237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.473313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 
nsid:0 00:07:17.501 [2024-07-15 13:52:55.473327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.473417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.501 [2024-07-15 13:52:55.473435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.501 #22 NEW cov: 12190 ft: 13743 corp: 7/304b lim: 90 exec/s: 0 rss: 72Mb L: 57/57 MS: 1 InsertByte- 00:07:17.501 [2024-07-15 13:52:55.533488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.501 [2024-07-15 13:52:55.533517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.533594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.501 [2024-07-15 13:52:55.533615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.501 [2024-07-15 13:52:55.533684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.501 [2024-07-15 13:52:55.533701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.501 #23 NEW cov: 12190 ft: 13802 corp: 8/360b lim: 90 exec/s: 0 rss: 72Mb L: 56/57 MS: 1 CMP- DE: "\000\000"- 00:07:17.760 [2024-07-15 13:52:55.583487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.760 [2024-07-15 13:52:55.583516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.760 [2024-07-15 13:52:55.583604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.760 [2024-07-15 13:52:55.583623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.760 #26 NEW cov: 12190 ft: 13968 corp: 9/398b lim: 90 exec/s: 0 rss: 72Mb L: 38/57 MS: 3 ChangeByte-ChangeByte-CrossOver- 00:07:17.760 [2024-07-15 13:52:55.633814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.760 [2024-07-15 13:52:55.633842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.760 [2024-07-15 13:52:55.633900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.760 [2024-07-15 13:52:55.633918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.760 [2024-07-15 13:52:55.633992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.760 [2024-07-15 13:52:55.634008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.760 #27 NEW cov: 12190 ft: 14021 corp: 10/454b lim: 90 exec/s: 0 rss: 72Mb L: 56/57 MS: 1 CopyPart- 00:07:17.760 
[2024-07-15 13:52:55.693727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.760 [2024-07-15 13:52:55.693761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.760 [2024-07-15 13:52:55.693846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.760 [2024-07-15 13:52:55.693865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.760 #28 NEW cov: 12190 ft: 14115 corp: 11/497b lim: 90 exec/s: 0 rss: 72Mb L: 43/57 MS: 1 InsertByte- 00:07:17.760 [2024-07-15 13:52:55.754251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.760 [2024-07-15 13:52:55.754281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.760 [2024-07-15 13:52:55.754369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.760 [2024-07-15 13:52:55.754385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.760 [2024-07-15 13:52:55.754476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.760 [2024-07-15 13:52:55.754495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.760 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:17.760 #29 NEW cov: 12213 ft: 14151 corp: 12/553b lim: 90 exec/s: 0 rss: 73Mb L: 56/57 MS: 1 CMP- DE: "\007\000\000\000"- 00:07:17.760 [2024-07-15 13:52:55.803813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.760 [2024-07-15 13:52:55.803844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.760 #30 NEW cov: 12213 ft: 14958 corp: 13/583b lim: 90 exec/s: 0 rss: 73Mb L: 30/57 MS: 1 EraseBytes- 00:07:18.018 [2024-07-15 13:52:55.854548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.018 [2024-07-15 13:52:55.854582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.018 [2024-07-15 13:52:55.854649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.018 [2024-07-15 13:52:55.854666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:55.854726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.019 [2024-07-15 13:52:55.854744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.019 #36 NEW cov: 12213 ft: 15007 corp: 14/640b lim: 90 exec/s: 36 rss: 73Mb L: 57/57 MS: 1 InsertByte- 00:07:18.019 [2024-07-15 13:52:55.904743] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.019 [2024-07-15 13:52:55.904773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:55.904851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.019 [2024-07-15 13:52:55.904867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:55.904959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.019 [2024-07-15 13:52:55.904976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.019 #37 NEW cov: 12213 ft: 15039 corp: 15/696b lim: 90 exec/s: 37 rss: 73Mb L: 56/57 MS: 1 ShuffleBytes- 00:07:18.019 [2024-07-15 13:52:55.955171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.019 [2024-07-15 13:52:55.955201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:55.955271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.019 [2024-07-15 13:52:55.955292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:55.955349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.019 [2024-07-15 13:52:55.955368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:55.955457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.019 [2024-07-15 13:52:55.955477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.019 #38 NEW cov: 12213 ft: 15418 corp: 16/774b lim: 90 exec/s: 38 rss: 73Mb L: 78/78 MS: 1 CrossOver- 00:07:18.019 [2024-07-15 13:52:56.015417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.019 [2024-07-15 13:52:56.015447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:56.015516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.019 [2024-07-15 13:52:56.015534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:56.015581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.019 [2024-07-15 13:52:56.015600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.019 [2024-07-15 13:52:56.015688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.019 [2024-07-15 13:52:56.015707] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.019 #39 NEW cov: 12213 ft: 15426 corp: 17/856b lim: 90 exec/s: 39 rss: 73Mb L: 82/82 MS: 1 CopyPart- 00:07:18.019 [2024-07-15 13:52:56.074677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.019 [2024-07-15 13:52:56.074712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.278 #40 NEW cov: 12213 ft: 15507 corp: 18/880b lim: 90 exec/s: 40 rss: 73Mb L: 24/82 MS: 1 EraseBytes- 00:07:18.278 [2024-07-15 13:52:56.145204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.278 [2024-07-15 13:52:56.145241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.278 [2024-07-15 13:52:56.145342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.278 [2024-07-15 13:52:56.145360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.278 #41 NEW cov: 12213 ft: 15549 corp: 19/922b lim: 90 exec/s: 41 rss: 73Mb L: 42/82 MS: 1 ShuffleBytes- 00:07:18.278 [2024-07-15 13:52:56.195751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.278 [2024-07-15 13:52:56.195787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.278 [2024-07-15 13:52:56.195865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.278 [2024-07-15 13:52:56.195885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.278 [2024-07-15 13:52:56.195982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.278 [2024-07-15 13:52:56.196000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.278 #42 NEW cov: 12213 ft: 15610 corp: 20/979b lim: 90 exec/s: 42 rss: 73Mb L: 57/82 MS: 1 PersAutoDict- DE: "\007\000\000\000"- 00:07:18.278 [2024-07-15 13:52:56.265287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.278 [2024-07-15 13:52:56.265325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.278 #43 NEW cov: 12213 ft: 15614 corp: 21/1010b lim: 90 exec/s: 43 rss: 73Mb L: 31/82 MS: 1 InsertByte- 00:07:18.278 [2024-07-15 13:52:56.315817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.278 [2024-07-15 13:52:56.315852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.278 [2024-07-15 13:52:56.315917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.278 [2024-07-15 13:52:56.315935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.278 #44 NEW cov: 12213 ft: 15620 corp: 22/1051b lim: 90 exec/s: 44 rss: 73Mb L: 41/82 MS: 1 EraseBytes- 00:07:18.537 [2024-07-15 13:52:56.366352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.537 [2024-07-15 13:52:56.366385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.366483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.537 [2024-07-15 13:52:56.366503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.366595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.537 [2024-07-15 13:52:56.366616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.537 #45 NEW cov: 12213 ft: 15634 corp: 23/1107b lim: 90 exec/s: 45 rss: 73Mb L: 56/82 MS: 1 CMP- DE: ")?\000\000"- 00:07:18.537 [2024-07-15 13:52:56.416800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.537 [2024-07-15 13:52:56.416831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.416905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.537 [2024-07-15 13:52:56.416924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.416976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.537 [2024-07-15 13:52:56.416994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.417085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.537 [2024-07-15 13:52:56.417102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.537 #46 NEW cov: 12213 ft: 15647 corp: 24/1191b lim: 90 exec/s: 46 rss: 73Mb L: 84/84 MS: 1 CopyPart- 00:07:18.537 [2024-07-15 13:52:56.486778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.537 [2024-07-15 13:52:56.486816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.486918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.537 [2024-07-15 13:52:56.486939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.487026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.537 [2024-07-15 13:52:56.487042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.537 #47 NEW cov: 12213 ft: 15664 corp: 25/1247b lim: 90 exec/s: 47 rss: 73Mb L: 56/84 MS: 1 CopyPart- 00:07:18.537 [2024-07-15 13:52:56.546624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.537 [2024-07-15 13:52:56.546656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.546756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.537 [2024-07-15 13:52:56.546773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.537 #48 NEW cov: 12213 ft: 15686 corp: 26/1290b lim: 90 exec/s: 48 rss: 73Mb L: 43/84 MS: 1 ShuffleBytes- 00:07:18.537 [2024-07-15 13:52:56.606800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.537 [2024-07-15 13:52:56.606831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.537 [2024-07-15 13:52:56.606908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.537 [2024-07-15 13:52:56.606926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.796 #49 NEW cov: 12213 ft: 15699 corp: 27/1332b lim: 90 exec/s: 49 rss: 73Mb L: 42/84 MS: 1 ChangeBit- 00:07:18.796 [2024-07-15 13:52:56.656945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.796 [2024-07-15 13:52:56.656975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 13:52:56.657063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.796 [2024-07-15 13:52:56.657082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.796 #50 NEW cov: 12213 ft: 15709 corp: 28/1374b lim: 90 exec/s: 50 rss: 73Mb L: 42/84 MS: 1 PersAutoDict- DE: "\007\000\000\000"- 00:07:18.796 [2024-07-15 13:52:56.717117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.796 [2024-07-15 13:52:56.717147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 13:52:56.717230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.796 [2024-07-15 13:52:56.717257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.796 #51 NEW cov: 12213 ft: 15724 corp: 29/1417b lim: 90 exec/s: 51 rss: 73Mb L: 43/84 MS: 1 CopyPart- 00:07:18.796 [2024-07-15 13:52:56.777646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.796 [2024-07-15 13:52:56.777678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 
13:52:56.777735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.796 [2024-07-15 13:52:56.777756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 13:52:56.777808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.796 [2024-07-15 13:52:56.777825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.796 #52 NEW cov: 12213 ft: 15744 corp: 30/1473b lim: 90 exec/s: 52 rss: 73Mb L: 56/84 MS: 1 ChangeByte- 00:07:18.796 [2024-07-15 13:52:56.828233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.796 [2024-07-15 13:52:56.828262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 13:52:56.828334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.796 [2024-07-15 13:52:56.828353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 13:52:56.828401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.796 [2024-07-15 13:52:56.828421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.796 [2024-07-15 13:52:56.828506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.796 [2024-07-15 13:52:56.828522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.796 #53 NEW cov: 12213 ft: 15761 corp: 31/1559b lim: 90 exec/s: 26 rss: 73Mb L: 86/86 MS: 1 CrossOver- 00:07:18.796 #53 DONE cov: 12213 ft: 15761 corp: 31/1559b lim: 90 exec/s: 26 rss: 73Mb 00:07:18.796 ###### Recommended dictionary. ###### 00:07:18.796 "\000\000" # Uses: 0 00:07:18.796 "\007\000\000\000" # Uses: 2 00:07:18.796 ")?\000\000" # Uses: 0 00:07:18.796 ###### End of recommended dictionary. 
###### 00:07:18.796 Done 53 runs in 2 second(s) 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.055 13:52:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:19.055 13:52:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.055 13:52:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.055 13:52:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.055 13:52:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:19.055 [2024-07-15 13:52:57.038030] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
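Everything between each "Starting SPDK" banner and the next "Done ... runs" summary is raw libFuzzer status output. A key to the recurring fields, using a line from the run above (the annotations are standard libFuzzer semantics, not taken from this log):

# #38 NEW cov: 12213 ft: 15418 corp: 16/774b lim: 90 exec/s: 38 rss: 73Mb L: 78/78 MS: 1 CrossOver-
#
# #38       execution count at which the event was reported
# NEW       the input increased coverage and was added to the corpus
# cov:      total coverage points hit so far
# ft:       "features" -- libFuzzer's finer-grained coverage signal
# corp:     corpus entries / total corpus size in bytes
# lim:      current input-length cap (ramps up toward -max_len)
# exec/s:   executions per second
# rss:      resident memory of the fuzzer process
# L:        size of this input / largest input in the corpus
# MS:       number of mutations applied and the mutation sequence used
#
# "#53 DONE" plus the "Recommended dictionary" block close each timed run
# (-t 1 in the invocations above), after which run.sh advances to the next fuzzer.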
00:07:19.055 [2024-07-15 13:52:57.038099] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849183 ] 00:07:19.055 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.313 [2024-07-15 13:52:57.250865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.313 [2024-07-15 13:52:57.321390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.313 [2024-07-15 13:52:57.381237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.572 [2024-07-15 13:52:57.397516] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:19.572 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.572 INFO: Seed: 644283930 00:07:19.572 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:19.572 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:19.572 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.572 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.572 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.572 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.572 This may also happen if the target rejected all inputs we tried so far 00:07:19.572 [2024-07-15 13:52:57.455782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.572 [2024-07-15 13:52:57.455813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.572 [2024-07-15 13:52:57.455862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.572 [2024-07-15 13:52:57.455878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.572 [2024-07-15 13:52:57.455933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.572 [2024-07-15 13:52:57.455948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.831 NEW_FUNC[1/697]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:19.831 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.831 #8 NEW cov: 11928 ft: 11945 corp: 2/36b lim: 50 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:19.831 [2024-07-15 13:52:57.806738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.831 [2024-07-15 13:52:57.806796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.831 [2024-07-15 13:52:57.806869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.831 [2024-07-15 13:52:57.806895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.831 [2024-07-15 13:52:57.806966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.831 [2024-07-15 13:52:57.806991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.831 #9 NEW cov: 12074 ft: 12574 corp: 3/71b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:19.831 [2024-07-15 13:52:57.866728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.831 [2024-07-15 13:52:57.866758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.831 [2024-07-15 13:52:57.866816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.831 [2024-07-15 13:52:57.866831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.831 [2024-07-15 13:52:57.866884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.831 [2024-07-15 13:52:57.866900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.831 #17 NEW cov: 12080 ft: 12886 corp: 4/106b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:07:20.089 [2024-07-15 13:52:57.906825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.089 [2024-07-15 13:52:57.906853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:57.906889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.089 [2024-07-15 13:52:57.906904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:57.906955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.089 [2024-07-15 13:52:57.906971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.089 #18 NEW cov: 12165 ft: 13133 corp: 5/141b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:20.089 [2024-07-15 13:52:57.956834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.089 [2024-07-15 13:52:57.956860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:57.956909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.089 [2024-07-15 13:52:57.956923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.089 #19 NEW cov: 12165 ft: 13512 corp: 6/162b lim: 50 exec/s: 0 rss: 72Mb L: 21/35 MS: 1 EraseBytes- 00:07:20.089 [2024-07-15 13:52:57.997070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.089 [2024-07-15 
13:52:57.997096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:57.997130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.089 [2024-07-15 13:52:57.997144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:57.997197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.089 [2024-07-15 13:52:57.997212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.089 #20 NEW cov: 12165 ft: 13557 corp: 7/197b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:20.089 [2024-07-15 13:52:58.047210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.089 [2024-07-15 13:52:58.047244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:58.047306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.089 [2024-07-15 13:52:58.047324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:58.047375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.089 [2024-07-15 13:52:58.047390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.089 #21 NEW cov: 12165 ft: 13649 corp: 8/232b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:20.089 [2024-07-15 13:52:58.097361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.089 [2024-07-15 13:52:58.097386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:58.097430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.089 [2024-07-15 13:52:58.097445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:58.097496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.089 [2024-07-15 13:52:58.097526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.089 #22 NEW cov: 12165 ft: 13668 corp: 9/267b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:20.089 [2024-07-15 13:52:58.137458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.089 [2024-07-15 13:52:58.137486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:58.137539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.089 [2024-07-15 
13:52:58.137554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.089 [2024-07-15 13:52:58.137605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.089 [2024-07-15 13:52:58.137621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.348 #23 NEW cov: 12165 ft: 13692 corp: 10/302b lim: 50 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:20.348 [2024-07-15 13:52:58.187669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.348 [2024-07-15 13:52:58.187696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.187752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.348 [2024-07-15 13:52:58.187768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.187821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.348 [2024-07-15 13:52:58.187836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.348 #24 NEW cov: 12165 ft: 13733 corp: 11/339b lim: 50 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CrossOver- 00:07:20.348 [2024-07-15 13:52:58.237729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.348 [2024-07-15 13:52:58.237755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.237806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.348 [2024-07-15 13:52:58.237822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.237876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.348 [2024-07-15 13:52:58.237891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.348 #25 NEW cov: 12165 ft: 13737 corp: 12/376b lim: 50 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBit- 00:07:20.348 [2024-07-15 13:52:58.287869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.348 [2024-07-15 13:52:58.287896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.287931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.348 [2024-07-15 13:52:58.287945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.287996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.348 [2024-07-15 
13:52:58.288009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.348 #31 NEW cov: 12165 ft: 13764 corp: 13/412b lim: 50 exec/s: 0 rss: 73Mb L: 36/37 MS: 1 InsertByte- 00:07:20.348 [2024-07-15 13:52:58.338016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.348 [2024-07-15 13:52:58.338043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.338081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.348 [2024-07-15 13:52:58.338095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.338145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.348 [2024-07-15 13:52:58.338160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.348 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.348 #32 NEW cov: 12188 ft: 13796 corp: 14/449b lim: 50 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 ChangeByte- 00:07:20.348 [2024-07-15 13:52:58.388210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.348 [2024-07-15 13:52:58.388241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.388278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.348 [2024-07-15 13:52:58.388293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.348 [2024-07-15 13:52:58.388344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.348 [2024-07-15 13:52:58.388359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.348 #33 NEW cov: 12188 ft: 13875 corp: 15/484b lim: 50 exec/s: 0 rss: 73Mb L: 35/37 MS: 1 ChangeBinInt- 00:07:20.607 [2024-07-15 13:52:58.428282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.607 [2024-07-15 13:52:58.428308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.607 [2024-07-15 13:52:58.428359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.607 [2024-07-15 13:52:58.428375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.607 [2024-07-15 13:52:58.428428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.607 [2024-07-15 13:52:58.428446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.607 #34 NEW cov: 12188 ft: 13904 corp: 
16/518b lim: 50 exec/s: 34 rss: 73Mb L: 34/37 MS: 1 EraseBytes- 00:07:20.607 [2024-07-15 13:52:58.468405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.607 [2024-07-15 13:52:58.468432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.607 [2024-07-15 13:52:58.468492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.607 [2024-07-15 13:52:58.468508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.468563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.608 [2024-07-15 13:52:58.468579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.608 #35 NEW cov: 12188 ft: 13986 corp: 17/555b lim: 50 exec/s: 35 rss: 73Mb L: 37/37 MS: 1 ChangeBinInt- 00:07:20.608 [2024-07-15 13:52:58.518381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.608 [2024-07-15 13:52:58.518408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.518465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.608 [2024-07-15 13:52:58.518480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.608 #36 NEW cov: 12188 ft: 14001 corp: 18/576b lim: 50 exec/s: 36 rss: 73Mb L: 21/37 MS: 1 CMP- DE: "\254x3\002\000\000\000\000"- 00:07:20.608 [2024-07-15 13:52:58.568859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.608 [2024-07-15 13:52:58.568886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.568923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.608 [2024-07-15 13:52:58.568938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.568989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.608 [2024-07-15 13:52:58.569005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.569056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.608 [2024-07-15 13:52:58.569069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.608 #37 NEW cov: 12188 ft: 14370 corp: 19/625b lim: 50 exec/s: 37 rss: 73Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:20.608 [2024-07-15 13:52:58.618658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.608 [2024-07-15 13:52:58.618684] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.618721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.608 [2024-07-15 13:52:58.618736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.608 #38 NEW cov: 12188 ft: 14386 corp: 20/646b lim: 50 exec/s: 38 rss: 73Mb L: 21/49 MS: 1 ShuffleBytes- 00:07:20.608 [2024-07-15 13:52:58.669100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.608 [2024-07-15 13:52:58.669129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.669167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.608 [2024-07-15 13:52:58.669182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.669231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.608 [2024-07-15 13:52:58.669261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.608 [2024-07-15 13:52:58.669313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.608 [2024-07-15 13:52:58.669329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.867 #39 NEW cov: 12188 ft: 14429 corp: 21/694b lim: 50 exec/s: 39 rss: 73Mb L: 48/49 MS: 1 InsertRepeatedBytes- 00:07:20.867 [2024-07-15 13:52:58.709121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.867 [2024-07-15 13:52:58.709148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.709207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.867 [2024-07-15 13:52:58.709229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.709279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.867 [2024-07-15 13:52:58.709295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.867 #40 NEW cov: 12188 ft: 14456 corp: 22/730b lim: 50 exec/s: 40 rss: 73Mb L: 36/49 MS: 1 InsertByte- 00:07:20.867 [2024-07-15 13:52:58.749202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.867 [2024-07-15 13:52:58.749233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.749281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.867 [2024-07-15 13:52:58.749296] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.749350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.867 [2024-07-15 13:52:58.749364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.867 #41 NEW cov: 12188 ft: 14488 corp: 23/765b lim: 50 exec/s: 41 rss: 73Mb L: 35/49 MS: 1 ShuffleBytes- 00:07:20.867 [2024-07-15 13:52:58.789324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.867 [2024-07-15 13:52:58.789351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.789407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.867 [2024-07-15 13:52:58.789423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.789474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.867 [2024-07-15 13:52:58.789489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.867 #42 NEW cov: 12188 ft: 14506 corp: 24/802b lim: 50 exec/s: 42 rss: 73Mb L: 37/49 MS: 1 ShuffleBytes- 00:07:20.867 [2024-07-15 13:52:58.829421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.867 [2024-07-15 13:52:58.829447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.829488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.867 [2024-07-15 13:52:58.829503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.829553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.867 [2024-07-15 13:52:58.829568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.867 #43 NEW cov: 12188 ft: 14512 corp: 25/837b lim: 50 exec/s: 43 rss: 73Mb L: 35/49 MS: 1 ChangeBit- 00:07:20.867 [2024-07-15 13:52:58.869364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.867 [2024-07-15 13:52:58.869392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.869427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.867 [2024-07-15 13:52:58.869442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.867 #44 NEW cov: 12188 ft: 14529 corp: 26/862b lim: 50 exec/s: 44 rss: 73Mb L: 25/49 MS: 1 CMP- DE: "\000\000\001>"- 00:07:20.867 [2024-07-15 13:52:58.909605] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.867 [2024-07-15 13:52:58.909633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.909668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.867 [2024-07-15 13:52:58.909682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.867 [2024-07-15 13:52:58.909734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.867 [2024-07-15 13:52:58.909749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.127 #45 NEW cov: 12188 ft: 14536 corp: 27/899b lim: 50 exec/s: 45 rss: 73Mb L: 37/49 MS: 1 ShuffleBytes- 00:07:21.127 [2024-07-15 13:52:58.959740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.127 [2024-07-15 13:52:58.959767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:58.959819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.127 [2024-07-15 13:52:58.959835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:58.959886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.127 [2024-07-15 13:52:58.959902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.127 #46 NEW cov: 12188 ft: 14554 corp: 28/937b lim: 50 exec/s: 46 rss: 73Mb L: 38/49 MS: 1 InsertByte- 00:07:21.127 [2024-07-15 13:52:59.000026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.127 [2024-07-15 13:52:59.000052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.000106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.127 [2024-07-15 13:52:59.000121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.000174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.127 [2024-07-15 13:52:59.000189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.000246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.127 [2024-07-15 13:52:59.000262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.127 #47 NEW cov: 12188 ft: 14607 corp: 29/979b lim: 50 exec/s: 47 rss: 74Mb L: 42/49 MS: 1 InsertRepeatedBytes- 00:07:21.127 [2024-07-15 13:52:59.050000] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.127 [2024-07-15 13:52:59.050027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.050063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.127 [2024-07-15 13:52:59.050079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.050130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.127 [2024-07-15 13:52:59.050145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.127 #48 NEW cov: 12188 ft: 14616 corp: 30/1014b lim: 50 exec/s: 48 rss: 74Mb L: 35/49 MS: 1 CMP- DE: "\031\227+\237\270\021\023\000"- 00:07:21.127 [2024-07-15 13:52:59.100409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.127 [2024-07-15 13:52:59.100437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.100497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.127 [2024-07-15 13:52:59.100514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.100565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.127 [2024-07-15 13:52:59.100581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.100632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.127 [2024-07-15 13:52:59.100648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.127 #49 NEW cov: 12188 ft: 14629 corp: 31/1063b lim: 50 exec/s: 49 rss: 74Mb L: 49/49 MS: 1 ChangeByte- 00:07:21.127 [2024-07-15 13:52:59.150170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.127 [2024-07-15 13:52:59.150198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.127 [2024-07-15 13:52:59.150273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.127 [2024-07-15 13:52:59.150289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.127 #50 NEW cov: 12188 ft: 14648 corp: 32/1084b lim: 50 exec/s: 50 rss: 74Mb L: 21/49 MS: 1 EraseBytes- 00:07:21.385 [2024-07-15 13:52:59.200505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.385 [2024-07-15 13:52:59.200536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.385 
[2024-07-15 13:52:59.200570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.385 [2024-07-15 13:52:59.200586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.385 [2024-07-15 13:52:59.200638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.385 [2024-07-15 13:52:59.200654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.385 #51 NEW cov: 12188 ft: 14659 corp: 33/1119b lim: 50 exec/s: 51 rss: 74Mb L: 35/49 MS: 1 ChangeBinInt- 00:07:21.385 [2024-07-15 13:52:59.250616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.385 [2024-07-15 13:52:59.250642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.385 [2024-07-15 13:52:59.250695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.385 [2024-07-15 13:52:59.250711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.385 [2024-07-15 13:52:59.250763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.385 [2024-07-15 13:52:59.250778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.385 #52 NEW cov: 12188 ft: 14663 corp: 34/1156b lim: 50 exec/s: 52 rss: 74Mb L: 37/49 MS: 1 ChangeBinInt- 00:07:21.385 [2024-07-15 13:52:59.290699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.385 [2024-07-15 13:52:59.290726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.385 [2024-07-15 13:52:59.290761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.385 [2024-07-15 13:52:59.290776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.385 [2024-07-15 13:52:59.290829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.385 [2024-07-15 13:52:59.290844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.385 #53 NEW cov: 12188 ft: 14673 corp: 35/1190b lim: 50 exec/s: 53 rss: 75Mb L: 34/49 MS: 1 ChangeBit- 00:07:21.385 [2024-07-15 13:52:59.340660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.386 [2024-07-15 13:52:59.340686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.386 [2024-07-15 13:52:59.340738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.386 [2024-07-15 13:52:59.340753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.386 
#54 NEW cov: 12188 ft: 14675 corp: 36/1211b lim: 50 exec/s: 54 rss: 75Mb L: 21/49 MS: 1 ShuffleBytes- 00:07:21.386 [2024-07-15 13:52:59.391144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.386 [2024-07-15 13:52:59.391170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.386 [2024-07-15 13:52:59.391215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.386 [2024-07-15 13:52:59.391250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.386 [2024-07-15 13:52:59.391304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.386 [2024-07-15 13:52:59.391320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.386 [2024-07-15 13:52:59.391370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.386 [2024-07-15 13:52:59.391385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.386 #55 NEW cov: 12188 ft: 14688 corp: 37/1254b lim: 50 exec/s: 55 rss: 75Mb L: 43/49 MS: 1 PersAutoDict- DE: "\031\227+\237\270\021\023\000"- 00:07:21.386 [2024-07-15 13:52:59.431049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.386 [2024-07-15 13:52:59.431076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.386 [2024-07-15 13:52:59.431138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.386 [2024-07-15 13:52:59.431154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.386 [2024-07-15 13:52:59.431206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.386 [2024-07-15 13:52:59.431226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.644 #56 NEW cov: 12188 ft: 14702 corp: 38/1290b lim: 50 exec/s: 28 rss: 75Mb L: 36/49 MS: 1 ChangeASCIIInt- 00:07:21.644 #56 DONE cov: 12188 ft: 14702 corp: 38/1290b lim: 50 exec/s: 28 rss: 75Mb 00:07:21.644 ###### Recommended dictionary. ###### 00:07:21.644 "\254x3\002\000\000\000\000" # Uses: 0 00:07:21.644 "\000\000\001>" # Uses: 0 00:07:21.644 "\031\227+\237\270\021\023\000" # Uses: 1 00:07:21.644 ###### End of recommended dictionary. 
###### 00:07:21.644 Done 56 runs in 2 second(s) 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.644 13:52:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:21.644 [2024-07-15 13:52:59.646745] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
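The xtrace block above records how nvmf/run.sh provisions fuzzer 22: it builds the TCP service ID from the zero-padded fuzzer number (fuzzer 22 listens on trsvcid 4422, fuzzer 23 later in this log on 4423), creates a per-fuzzer corpus directory, rewrites trsvcid in fuzz_json.conf with sed, registers two LeakSanitizer suppressions, and launches llvm_nvme_fuzz against the resulting transport ID. The following bash sketch reconstructs that setup from the trace alone; $SPDK_DIR is a hypothetical stand-in for the long Jenkins workspace path, and the redirections for the sed output and the leak suppressions are assumptions, since xtrace does not print redirections.

  #!/usr/bin/env bash
  # Sketch reconstructed from the nvmf/run.sh xtrace above; not the verbatim
  # script. $SPDK_DIR is a hypothetical stand-in for the workspace path.
  set -e
  fuzzer_type=22                             # -Z: which fuzz target to run
  timen=1                                    # -t: run time in seconds
  core=0x1                                   # -m: reactor core mask
  port="44$(printf %02d "$fuzzer_type")"     # fuzzer 22 -> trsvcid 4422
  corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}"
  nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

  mkdir -p "$corpus_dir"

  # Point this instance's config at its own port instead of the default 4420.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

  # Suppress leaks from objects that intentionally outlive the short run
  # (append redirection assumed; the file is removed again after each run).
  echo leak:spdk_nvmf_qpair_disconnect >> "$suppress_file"
  echo leak:nvmf_ctrlr_create >> "$suppress_file"

  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m "$core" -s 512 -P "$SPDK_DIR/../output/llvm/" \
      -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

Giving every fuzzer its own trsvcid is what lets these short runs bring up a fresh NVMe/TCP listener back to back (4422 here, 4423 for the next run) without colliding with a target already listening on the default 4420.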
00:07:21.644 [2024-07-15 13:52:59.646830] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849554 ] 00:07:21.644 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.903 [2024-07-15 13:52:59.858485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.903 [2024-07-15 13:52:59.929036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.162 [2024-07-15 13:52:59.988635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.162 [2024-07-15 13:53:00.004933] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:22.162 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.162 INFO: Seed: 3254269304 00:07:22.162 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:22.162 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:22.162 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.162 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.162 #2 INITED exec/s: 0 rss: 65Mb 00:07:22.162 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.162 This may also happen if the target rejected all inputs we tried so far 00:07:22.162 [2024-07-15 13:53:00.070173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.162 [2024-07-15 13:53:00.070210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.162 [2024-07-15 13:53:00.070270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.162 [2024-07-15 13:53:00.070287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.420 NEW_FUNC[1/697]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:22.420 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.420 #7 NEW cov: 11970 ft: 11957 corp: 2/39b lim: 85 exec/s: 0 rss: 71Mb L: 38/38 MS: 5 CopyPart-CopyPart-ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:22.420 [2024-07-15 13:53:00.421289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.420 [2024-07-15 13:53:00.421356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.420 [2024-07-15 13:53:00.421450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.420 [2024-07-15 13:53:00.421478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.420 #13 NEW cov: 12100 ft: 12618 corp: 3/77b lim: 85 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:07:22.420 [2024-07-15 13:53:00.481031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 
nsid:0 00:07:22.420 [2024-07-15 13:53:00.481060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.700 #16 NEW cov: 12106 ft: 13729 corp: 4/102b lim: 85 exec/s: 0 rss: 72Mb L: 25/38 MS: 3 CopyPart-ChangeBinInt-CrossOver- 00:07:22.700 [2024-07-15 13:53:00.521280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.700 [2024-07-15 13:53:00.521309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.700 [2024-07-15 13:53:00.521366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.700 [2024-07-15 13:53:00.521381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.700 #17 NEW cov: 12191 ft: 13965 corp: 5/142b lim: 85 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 CMP- DE: "\001\037"- 00:07:22.700 [2024-07-15 13:53:00.571298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.700 [2024-07-15 13:53:00.571337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.700 #18 NEW cov: 12191 ft: 14017 corp: 6/167b lim: 85 exec/s: 0 rss: 72Mb L: 25/40 MS: 1 ShuffleBytes- 00:07:22.701 [2024-07-15 13:53:00.621571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.701 [2024-07-15 13:53:00.621599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.701 [2024-07-15 13:53:00.621653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.701 [2024-07-15 13:53:00.621668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.701 #19 NEW cov: 12191 ft: 14138 corp: 7/205b lim: 85 exec/s: 0 rss: 72Mb L: 38/40 MS: 1 ChangeBinInt- 00:07:22.701 [2024-07-15 13:53:00.661537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.701 [2024-07-15 13:53:00.661565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.701 #20 NEW cov: 12191 ft: 14234 corp: 8/230b lim: 85 exec/s: 0 rss: 72Mb L: 25/40 MS: 1 ChangeByte- 00:07:22.701 [2024-07-15 13:53:00.701753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.701 [2024-07-15 13:53:00.701780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.701 [2024-07-15 13:53:00.701849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.701 [2024-07-15 13:53:00.701865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.701 #21 NEW cov: 12191 ft: 14296 corp: 9/269b lim: 85 exec/s: 0 rss: 72Mb L: 39/40 MS: 1 InsertByte- 00:07:22.701 [2024-07-15 13:53:00.742193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.701 [2024-07-15 13:53:00.742222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.701 [2024-07-15 13:53:00.742285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.701 [2024-07-15 13:53:00.742301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.701 [2024-07-15 13:53:00.742355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.701 [2024-07-15 13:53:00.742371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.701 [2024-07-15 13:53:00.742424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.701 [2024-07-15 13:53:00.742440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.959 #22 NEW cov: 12191 ft: 14820 corp: 10/349b lim: 85 exec/s: 0 rss: 72Mb L: 80/80 MS: 1 CopyPart- 00:07:22.959 [2024-07-15 13:53:00.791883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.959 [2024-07-15 13:53:00.791912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.959 #23 NEW cov: 12191 ft: 14899 corp: 11/368b lim: 85 exec/s: 0 rss: 72Mb L: 19/80 MS: 1 EraseBytes- 00:07:22.959 [2024-07-15 13:53:00.832134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.959 [2024-07-15 13:53:00.832161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.959 [2024-07-15 13:53:00.832213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.959 [2024-07-15 13:53:00.832235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.959 #24 NEW cov: 12191 ft: 14907 corp: 12/406b lim: 85 exec/s: 0 rss: 72Mb L: 38/80 MS: 1 ShuffleBytes- 00:07:22.959 [2024-07-15 13:53:00.872100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.959 [2024-07-15 13:53:00.872127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.959 #25 NEW cov: 12191 ft: 15027 corp: 13/425b lim: 85 exec/s: 0 rss: 72Mb L: 19/80 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:22.959 [2024-07-15 13:53:00.922259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.959 [2024-07-15 13:53:00.922287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.959 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.959 #26 NEW cov: 12214 ft: 15125 corp: 14/458b lim: 85 exec/s: 0 rss: 72Mb L: 33/80 MS: 1 CrossOver- 00:07:22.959 [2024-07-15 13:53:00.962356] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.959 [2024-07-15 13:53:00.962388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.959 #27 NEW cov: 12214 ft: 15153 corp: 15/483b lim: 85 exec/s: 0 rss: 73Mb L: 25/80 MS: 1 ChangeBinInt- 00:07:22.959 [2024-07-15 13:53:01.012653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.959 [2024-07-15 13:53:01.012681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.959 [2024-07-15 13:53:01.012722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.959 [2024-07-15 13:53:01.012737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.217 #28 NEW cov: 12214 ft: 15167 corp: 16/521b lim: 85 exec/s: 0 rss: 73Mb L: 38/80 MS: 1 ShuffleBytes- 00:07:23.217 [2024-07-15 13:53:01.062773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.217 [2024-07-15 13:53:01.062800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.217 [2024-07-15 13:53:01.062866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.217 [2024-07-15 13:53:01.062882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.217 #29 NEW cov: 12214 ft: 15233 corp: 17/563b lim: 85 exec/s: 29 rss: 73Mb L: 42/80 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:23.217 [2024-07-15 13:53:01.102764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.217 [2024-07-15 13:53:01.102792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.217 #30 NEW cov: 12214 ft: 15295 corp: 18/588b lim: 85 exec/s: 30 rss: 73Mb L: 25/80 MS: 1 ChangeByte- 00:07:23.218 [2024-07-15 13:53:01.153036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.218 [2024-07-15 13:53:01.153063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.218 [2024-07-15 13:53:01.153101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.218 [2024-07-15 13:53:01.153117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.218 #31 NEW cov: 12214 ft: 15297 corp: 19/627b lim: 85 exec/s: 31 rss: 73Mb L: 39/80 MS: 1 ChangeBinInt- 00:07:23.218 [2024-07-15 13:53:01.203026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.218 [2024-07-15 13:53:01.203053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.218 #32 NEW cov: 12214 ft: 15328 corp: 20/655b lim: 85 exec/s: 32 rss: 73Mb L: 28/80 MS: 1 EraseBytes- 00:07:23.218 
[2024-07-15 13:53:01.253285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.218 [2024-07-15 13:53:01.253312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.218 [2024-07-15 13:53:01.253365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.218 [2024-07-15 13:53:01.253381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.218 #33 NEW cov: 12214 ft: 15352 corp: 21/693b lim: 85 exec/s: 33 rss: 73Mb L: 38/80 MS: 1 CopyPart- 00:07:23.476 [2024-07-15 13:53:01.303291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.476 [2024-07-15 13:53:01.303329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.476 #34 NEW cov: 12214 ft: 15393 corp: 22/725b lim: 85 exec/s: 34 rss: 73Mb L: 32/80 MS: 1 CopyPart- 00:07:23.476 [2024-07-15 13:53:01.343380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.476 [2024-07-15 13:53:01.343407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.476 #35 NEW cov: 12214 ft: 15413 corp: 23/744b lim: 85 exec/s: 35 rss: 73Mb L: 19/80 MS: 1 ChangeBit- 00:07:23.476 [2024-07-15 13:53:01.393950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.476 [2024-07-15 13:53:01.393977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.476 [2024-07-15 13:53:01.394045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.476 [2024-07-15 13:53:01.394060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.476 [2024-07-15 13:53:01.394114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.476 [2024-07-15 13:53:01.394130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.476 [2024-07-15 13:53:01.394184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.476 [2024-07-15 13:53:01.394199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.476 #36 NEW cov: 12214 ft: 15425 corp: 24/818b lim: 85 exec/s: 36 rss: 73Mb L: 74/80 MS: 1 InsertRepeatedBytes- 00:07:23.476 [2024-07-15 13:53:01.433609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.476 [2024-07-15 13:53:01.433635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.476 #37 NEW cov: 12214 ft: 15438 corp: 25/850b lim: 85 exec/s: 37 rss: 73Mb L: 32/80 MS: 1 ChangeByte- 00:07:23.476 [2024-07-15 13:53:01.483901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.476 [2024-07-15 13:53:01.483926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.476 [2024-07-15 13:53:01.483975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.476 [2024-07-15 13:53:01.483991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.476 #38 NEW cov: 12214 ft: 15440 corp: 26/888b lim: 85 exec/s: 38 rss: 73Mb L: 38/80 MS: 1 CopyPart- 00:07:23.476 [2024-07-15 13:53:01.523861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.476 [2024-07-15 13:53:01.523888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.476 #39 NEW cov: 12214 ft: 15482 corp: 27/915b lim: 85 exec/s: 39 rss: 73Mb L: 27/80 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:23.735 [2024-07-15 13:53:01.564144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.735 [2024-07-15 13:53:01.564170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.735 [2024-07-15 13:53:01.564227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.735 [2024-07-15 13:53:01.564242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.735 #40 NEW cov: 12214 ft: 15487 corp: 28/954b lim: 85 exec/s: 40 rss: 73Mb L: 39/80 MS: 1 ChangeBit- 00:07:23.735 [2024-07-15 13:53:01.604450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.735 [2024-07-15 13:53:01.604476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.735 [2024-07-15 13:53:01.604514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.735 [2024-07-15 13:53:01.604528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.735 [2024-07-15 13:53:01.604582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.735 [2024-07-15 13:53:01.604596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.735 #41 NEW cov: 12214 ft: 15757 corp: 29/1008b lim: 85 exec/s: 41 rss: 73Mb L: 54/80 MS: 1 InsertRepeatedBytes- 00:07:23.735 [2024-07-15 13:53:01.644372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.735 [2024-07-15 13:53:01.644399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.735 [2024-07-15 13:53:01.644452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.735 [2024-07-15 13:53:01.644469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.735 #42 NEW cov: 12214 ft: 15759 corp: 30/1046b lim: 85 exec/s: 42 rss: 73Mb L: 38/80 MS: 1 ChangeBit- 00:07:23.735 [2024-07-15 13:53:01.684546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.735 [2024-07-15 13:53:01.684574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.735 [2024-07-15 13:53:01.684636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.735 [2024-07-15 13:53:01.684655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.735 #43 NEW cov: 12214 ft: 15799 corp: 31/1084b lim: 85 exec/s: 43 rss: 74Mb L: 38/80 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:23.735 [2024-07-15 13:53:01.724451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.735 [2024-07-15 13:53:01.724479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.735 #44 NEW cov: 12214 ft: 15854 corp: 32/1110b lim: 85 exec/s: 44 rss: 74Mb L: 26/80 MS: 1 InsertByte- 00:07:23.735 [2024-07-15 13:53:01.764577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.735 [2024-07-15 13:53:01.764603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.735 #50 NEW cov: 12214 ft: 15857 corp: 33/1143b lim: 85 exec/s: 50 rss: 74Mb L: 33/80 MS: 1 InsertByte- 00:07:23.994 [2024-07-15 13:53:01.814707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.994 [2024-07-15 13:53:01.814734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.994 #51 NEW cov: 12214 ft: 15868 corp: 34/1168b lim: 85 exec/s: 51 rss: 74Mb L: 25/80 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:23.994 [2024-07-15 13:53:01.854955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.994 [2024-07-15 13:53:01.854983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.994 [2024-07-15 13:53:01.855033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.994 [2024-07-15 13:53:01.855049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.994 #52 NEW cov: 12214 ft: 15872 corp: 35/1208b lim: 85 exec/s: 52 rss: 74Mb L: 40/80 MS: 1 ShuffleBytes- 00:07:23.994 [2024-07-15 13:53:01.895084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.994 [2024-07-15 13:53:01.895111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.994 [2024-07-15 13:53:01.895166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.994 [2024-07-15 13:53:01.895180] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.994 #53 NEW cov: 12214 ft: 15889 corp: 36/1248b lim: 85 exec/s: 53 rss: 74Mb L: 40/80 MS: 1 ChangeByte- 00:07:23.994 [2024-07-15 13:53:01.945238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.994 [2024-07-15 13:53:01.945266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.994 [2024-07-15 13:53:01.945303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.994 [2024-07-15 13:53:01.945318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.994 #56 NEW cov: 12214 ft: 15901 corp: 37/1289b lim: 85 exec/s: 56 rss: 74Mb L: 41/80 MS: 3 InsertByte-CopyPart-InsertRepeatedBytes- 00:07:23.994 [2024-07-15 13:53:01.985190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.994 [2024-07-15 13:53:01.985223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.994 #57 NEW cov: 12214 ft: 15951 corp: 38/1317b lim: 85 exec/s: 57 rss: 74Mb L: 28/80 MS: 1 EraseBytes- 00:07:23.994 [2024-07-15 13:53:02.025425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.994 [2024-07-15 13:53:02.025456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.994 [2024-07-15 13:53:02.025503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.994 [2024-07-15 13:53:02.025517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.994 #58 NEW cov: 12214 ft: 15956 corp: 39/1355b lim: 85 exec/s: 29 rss: 74Mb L: 38/80 MS: 1 ChangeBinInt- 00:07:23.994 #58 DONE cov: 12214 ft: 15956 corp: 39/1355b lim: 85 exec/s: 29 rss: 74Mb 00:07:23.994 ###### Recommended dictionary. ###### 00:07:23.994 "\001\037" # Uses: 5 00:07:23.994 ###### End of recommended dictionary. 
###### 00:07:23.994 Done 58 runs in 2 second(s) 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.254 13:53:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:24.254 [2024-07-15 13:53:02.242498] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:24.254 [2024-07-15 13:53:02.242573] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849926 ] 00:07:24.254 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.513 [2024-07-15 13:53:02.446662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.513 [2024-07-15 13:53:02.517004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.513 [2024-07-15 13:53:02.576724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.772 [2024-07-15 13:53:02.593016] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:24.772 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.772 INFO: Seed: 1545318066 00:07:24.772 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:24.772 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:24.772 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.772 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.772 #2 INITED exec/s: 0 rss: 65Mb 00:07:24.772 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.772 This may also happen if the target rejected all inputs we tried so far 00:07:24.772 [2024-07-15 13:53:02.651560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.772 [2024-07-15 13:53:02.651600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.031 NEW_FUNC[1/696]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:25.031 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.031 #11 NEW cov: 11903 ft: 11904 corp: 2/10b lim: 25 exec/s: 0 rss: 71Mb L: 9/9 MS: 4 CopyPart-EraseBytes-ChangeByte-InsertRepeatedBytes- 00:07:25.031 [2024-07-15 13:53:02.992486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.031 [2024-07-15 13:53:02.992546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.031 #12 NEW cov: 12033 ft: 12464 corp: 3/19b lim: 25 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:25.031 [2024-07-15 13:53:03.052447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.031 [2024-07-15 13:53:03.052477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.031 #18 NEW cov: 12039 ft: 12863 corp: 4/24b lim: 25 exec/s: 0 rss: 72Mb L: 5/9 MS: 1 InsertRepeatedBytes- 00:07:25.031 [2024-07-15 13:53:03.092529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.031 [2024-07-15 13:53:03.092559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.289 #19 NEW cov: 12124 ft: 13165 
corp: 5/29b lim: 25 exec/s: 0 rss: 72Mb L: 5/9 MS: 1 ChangeBit- 00:07:25.289 [2024-07-15 13:53:03.142792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.289 [2024-07-15 13:53:03.142821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.289 [2024-07-15 13:53:03.142875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.289 [2024-07-15 13:53:03.142889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.289 #22 NEW cov: 12124 ft: 13668 corp: 6/39b lim: 25 exec/s: 0 rss: 72Mb L: 10/10 MS: 3 ChangeBit-CopyPart-CMP- DE: "\001\023\021\272\316\366\214\200"- 00:07:25.289 [2024-07-15 13:53:03.182920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.289 [2024-07-15 13:53:03.182948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.289 [2024-07-15 13:53:03.182988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.289 [2024-07-15 13:53:03.183004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.289 #23 NEW cov: 12124 ft: 13754 corp: 7/50b lim: 25 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 InsertByte- 00:07:25.289 [2024-07-15 13:53:03.233035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.289 [2024-07-15 13:53:03.233063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.289 [2024-07-15 13:53:03.233131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.289 [2024-07-15 13:53:03.233150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.289 #24 NEW cov: 12124 ft: 13808 corp: 8/61b lim: 25 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ChangeBit- 00:07:25.289 [2024-07-15 13:53:03.283031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.289 [2024-07-15 13:53:03.283059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.289 #25 NEW cov: 12124 ft: 13910 corp: 9/67b lim: 25 exec/s: 0 rss: 72Mb L: 6/11 MS: 1 InsertByte- 00:07:25.289 [2024-07-15 13:53:03.323151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.289 [2024-07-15 13:53:03.323178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.289 #26 NEW cov: 12124 ft: 13963 corp: 10/73b lim: 25 exec/s: 0 rss: 72Mb L: 6/11 MS: 1 ChangeBinInt- 00:07:25.547 [2024-07-15 13:53:03.373587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.547 [2024-07-15 13:53:03.373614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.547 
[2024-07-15 13:53:03.373659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.547 [2024-07-15 13:53:03.373673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.547 #27 NEW cov: 12124 ft: 14007 corp: 11/83b lim: 25 exec/s: 0 rss: 72Mb L: 10/11 MS: 1 InsertByte- 00:07:25.547 [2024-07-15 13:53:03.423767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.547 [2024-07-15 13:53:03.423794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.547 [2024-07-15 13:53:03.423848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.547 [2024-07-15 13:53:03.423863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.547 #28 NEW cov: 12124 ft: 14045 corp: 12/97b lim: 25 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 PersAutoDict- DE: "\001\023\021\272\316\366\214\200"- 00:07:25.547 [2024-07-15 13:53:03.473898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.547 [2024-07-15 13:53:03.473926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.547 [2024-07-15 13:53:03.473963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.547 [2024-07-15 13:53:03.473979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.547 #29 NEW cov: 12124 ft: 14085 corp: 13/107b lim: 25 exec/s: 0 rss: 72Mb L: 10/14 MS: 1 CopyPart- 00:07:25.547 [2024-07-15 13:53:03.514113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.547 [2024-07-15 13:53:03.514141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.547 [2024-07-15 13:53:03.514188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.547 [2024-07-15 13:53:03.514203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.547 [2024-07-15 13:53:03.514264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.547 [2024-07-15 13:53:03.514280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.547 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.547 #30 NEW cov: 12147 ft: 14387 corp: 14/124b lim: 25 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 PersAutoDict- DE: "\001\023\021\272\316\366\214\200"- 00:07:25.547 [2024-07-15 13:53:03.553984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.547 [2024-07-15 13:53:03.554010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.547 #31 NEW cov: 12147 ft: 
14434 corp: 15/129b lim: 25 exec/s: 0 rss: 73Mb L: 5/17 MS: 1 ShuffleBytes- 00:07:25.547 [2024-07-15 13:53:03.604353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.547 [2024-07-15 13:53:03.604379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.547 [2024-07-15 13:53:03.604426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.547 [2024-07-15 13:53:03.604442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.547 [2024-07-15 13:53:03.604496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.547 [2024-07-15 13:53:03.604512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.806 #32 NEW cov: 12147 ft: 14501 corp: 16/145b lim: 25 exec/s: 32 rss: 73Mb L: 16/17 MS: 1 CopyPart- 00:07:25.806 [2024-07-15 13:53:03.654398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.806 [2024-07-15 13:53:03.654424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.654464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.806 [2024-07-15 13:53:03.654479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.806 #33 NEW cov: 12147 ft: 14522 corp: 17/156b lim: 25 exec/s: 33 rss: 73Mb L: 11/17 MS: 1 InsertByte- 00:07:25.806 [2024-07-15 13:53:03.704527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.806 [2024-07-15 13:53:03.704554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.704611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.806 [2024-07-15 13:53:03.704627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.806 #34 NEW cov: 12147 ft: 14577 corp: 18/166b lim: 25 exec/s: 34 rss: 73Mb L: 10/17 MS: 1 ChangeBit- 00:07:25.806 [2024-07-15 13:53:03.754674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.806 [2024-07-15 13:53:03.754700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.754756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.806 [2024-07-15 13:53:03.754771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.806 #35 NEW cov: 12147 ft: 14621 corp: 19/177b lim: 25 exec/s: 35 rss: 73Mb L: 11/17 MS: 1 ChangeBinInt- 00:07:25.806 [2024-07-15 13:53:03.805044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:07:25.806 [2024-07-15 13:53:03.805072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.805131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.806 [2024-07-15 13:53:03.805147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.805203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.806 [2024-07-15 13:53:03.805223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.805279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.806 [2024-07-15 13:53:03.805293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.806 #36 NEW cov: 12147 ft: 15050 corp: 20/197b lim: 25 exec/s: 36 rss: 73Mb L: 20/20 MS: 1 CrossOver- 00:07:25.806 [2024-07-15 13:53:03.855055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.806 [2024-07-15 13:53:03.855080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.855121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.806 [2024-07-15 13:53:03.855137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.806 [2024-07-15 13:53:03.855192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.806 [2024-07-15 13:53:03.855208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.064 #37 NEW cov: 12147 ft: 15082 corp: 21/214b lim: 25 exec/s: 37 rss: 73Mb L: 17/20 MS: 1 ChangeBit- 00:07:26.064 [2024-07-15 13:53:03.905223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.064 [2024-07-15 13:53:03.905250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.064 [2024-07-15 13:53:03.905316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.064 [2024-07-15 13:53:03.905332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.064 [2024-07-15 13:53:03.905388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.064 [2024-07-15 13:53:03.905403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.064 #38 NEW cov: 12147 ft: 15104 corp: 22/230b lim: 25 exec/s: 38 rss: 73Mb L: 16/20 MS: 1 ChangeByte- 00:07:26.064 [2024-07-15 13:53:03.955232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.064 
[2024-07-15 13:53:03.955259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.064 [2024-07-15 13:53:03.955313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.064 [2024-07-15 13:53:03.955330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.064 #40 NEW cov: 12147 ft: 15125 corp: 23/244b lim: 25 exec/s: 40 rss: 73Mb L: 14/20 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:26.064 [2024-07-15 13:53:03.995359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.064 [2024-07-15 13:53:03.995386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.064 [2024-07-15 13:53:03.995438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.064 [2024-07-15 13:53:03.995457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.064 #41 NEW cov: 12147 ft: 15165 corp: 24/255b lim: 25 exec/s: 41 rss: 73Mb L: 11/20 MS: 1 ChangeBinInt- 00:07:26.064 [2024-07-15 13:53:04.035588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.064 [2024-07-15 13:53:04.035614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.064 [2024-07-15 13:53:04.035660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.064 [2024-07-15 13:53:04.035676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.064 [2024-07-15 13:53:04.035732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.064 [2024-07-15 13:53:04.035748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.064 #42 NEW cov: 12147 ft: 15168 corp: 25/272b lim: 25 exec/s: 42 rss: 74Mb L: 17/20 MS: 1 ChangeBit- 00:07:26.064 [2024-07-15 13:53:04.085622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.064 [2024-07-15 13:53:04.085648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.065 [2024-07-15 13:53:04.085701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.065 [2024-07-15 13:53:04.085715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.065 #43 NEW cov: 12147 ft: 15183 corp: 26/285b lim: 25 exec/s: 43 rss: 74Mb L: 13/20 MS: 1 PersAutoDict- DE: "\001\023\021\272\316\366\214\200"- 00:07:26.065 [2024-07-15 13:53:04.135740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.065 [2024-07-15 13:53:04.135768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:26.065 [2024-07-15 13:53:04.135806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.323 [2024-07-15 13:53:04.135823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.323 #44 NEW cov: 12147 ft: 15197 corp: 27/295b lim: 25 exec/s: 44 rss: 74Mb L: 10/20 MS: 1 ChangeByte- 00:07:26.323 [2024-07-15 13:53:04.175828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.323 [2024-07-15 13:53:04.175855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.323 [2024-07-15 13:53:04.175898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.323 [2024-07-15 13:53:04.175914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.323 #45 NEW cov: 12147 ft: 15215 corp: 28/306b lim: 25 exec/s: 45 rss: 74Mb L: 11/20 MS: 1 ShuffleBytes- 00:07:26.323 [2024-07-15 13:53:04.215834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.323 [2024-07-15 13:53:04.215861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.323 #46 NEW cov: 12147 ft: 15228 corp: 29/315b lim: 25 exec/s: 46 rss: 74Mb L: 9/20 MS: 1 CrossOver- 00:07:26.323 [2024-07-15 13:53:04.256073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.323 [2024-07-15 13:53:04.256102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.323 [2024-07-15 13:53:04.256177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.324 [2024-07-15 13:53:04.256194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.324 #47 NEW cov: 12147 ft: 15240 corp: 30/325b lim: 25 exec/s: 47 rss: 74Mb L: 10/20 MS: 1 ChangeByte- 00:07:26.324 [2024-07-15 13:53:04.296443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.324 [2024-07-15 13:53:04.296470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.324 [2024-07-15 13:53:04.296522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.324 [2024-07-15 13:53:04.296537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.324 [2024-07-15 13:53:04.296591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.324 [2024-07-15 13:53:04.296607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.324 [2024-07-15 13:53:04.296661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.324 [2024-07-15 13:53:04.296677] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.324 #48 NEW cov: 12147 ft: 15249 corp: 31/346b lim: 25 exec/s: 48 rss: 74Mb L: 21/21 MS: 1 CrossOver- 00:07:26.324 [2024-07-15 13:53:04.336299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.324 [2024-07-15 13:53:04.336325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.324 [2024-07-15 13:53:04.336382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.324 [2024-07-15 13:53:04.336398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.324 #49 NEW cov: 12147 ft: 15312 corp: 32/356b lim: 25 exec/s: 49 rss: 74Mb L: 10/21 MS: 1 EraseBytes- 00:07:26.324 [2024-07-15 13:53:04.376539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.324 [2024-07-15 13:53:04.376565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.324 [2024-07-15 13:53:04.376604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.324 [2024-07-15 13:53:04.376620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.324 [2024-07-15 13:53:04.376675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.324 [2024-07-15 13:53:04.376690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.582 #50 NEW cov: 12147 ft: 15322 corp: 33/375b lim: 25 exec/s: 50 rss: 74Mb L: 19/21 MS: 1 PersAutoDict- DE: "\001\023\021\272\316\366\214\200"- 00:07:26.582 [2024-07-15 13:53:04.426701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.582 [2024-07-15 13:53:04.426728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.582 [2024-07-15 13:53:04.426769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.582 [2024-07-15 13:53:04.426784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.582 [2024-07-15 13:53:04.426842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.583 [2024-07-15 13:53:04.426860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.583 #51 NEW cov: 12147 ft: 15345 corp: 34/391b lim: 25 exec/s: 51 rss: 74Mb L: 16/21 MS: 1 CopyPart- 00:07:26.583 [2024-07-15 13:53:04.476778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.583 [2024-07-15 13:53:04.476806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.583 [2024-07-15 13:53:04.476868] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.583 [2024-07-15 13:53:04.476884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.583 [2024-07-15 13:53:04.476939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.583 [2024-07-15 13:53:04.476953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.583 #52 NEW cov: 12147 ft: 15356 corp: 35/406b lim: 25 exec/s: 52 rss: 75Mb L: 15/21 MS: 1 InsertByte- 00:07:26.583 [2024-07-15 13:53:04.526850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.583 [2024-07-15 13:53:04.526878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.583 [2024-07-15 13:53:04.526935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.583 [2024-07-15 13:53:04.526951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.583 #53 NEW cov: 12147 ft: 15374 corp: 36/418b lim: 25 exec/s: 53 rss: 75Mb L: 12/21 MS: 1 InsertByte- 00:07:26.583 [2024-07-15 13:53:04.566915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.583 [2024-07-15 13:53:04.566943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.583 [2024-07-15 13:53:04.566997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.583 [2024-07-15 13:53:04.567013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.583 #54 NEW cov: 12147 ft: 15389 corp: 37/429b lim: 25 exec/s: 54 rss: 75Mb L: 11/21 MS: 1 ChangeBit- 00:07:26.583 [2024-07-15 13:53:04.617194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.583 [2024-07-15 13:53:04.617226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.583 [2024-07-15 13:53:04.617289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.583 [2024-07-15 13:53:04.617305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.583 [2024-07-15 13:53:04.617364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.583 [2024-07-15 13:53:04.617380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.583 #55 NEW cov: 12147 ft: 15392 corp: 38/446b lim: 25 exec/s: 27 rss: 75Mb L: 17/21 MS: 1 CMP- DE: "\000\355\356DmY\002\261"- 00:07:26.842 #55 DONE cov: 12147 ft: 15392 corp: 38/446b lim: 25 exec/s: 27 rss: 75Mb 00:07:26.842 ###### Recommended dictionary. 
###### 00:07:26.842 "\001\023\021\272\316\366\214\200" # Uses: 4 00:07:26.842 "\000\355\356DmY\002\261" # Uses: 0 00:07:26.842 ###### End of recommended dictionary. ###### 00:07:26.842 Done 55 runs in 2 second(s) 00:07:26.842 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.842 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.842 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.842 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.843 13:53:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:26.843 [2024-07-15 13:53:04.835861] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:26.843 [2024-07-15 13:53:04.835935] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850291 ] 00:07:26.843 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.102 [2024-07-15 13:53:05.042758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.102 [2024-07-15 13:53:05.113026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.102 [2024-07-15 13:53:05.172391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.362 [2024-07-15 13:53:05.188687] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:27.362 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.362 INFO: Seed: 4142326726 00:07:27.362 INFO: Loaded 1 modules (357840 inline 8-bit counters): 357840 [0x29ab28c, 0x2a0285c), 00:07:27.362 INFO: Loaded 1 PC tables (357840 PCs): 357840 [0x2a02860,0x2f78560), 00:07:27.362 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:27.362 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.362 #2 INITED exec/s: 0 rss: 65Mb 00:07:27.362 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.362 This may also happen if the target rejected all inputs we tried so far 00:07:27.362 [2024-07-15 13:53:05.253955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.362 [2024-07-15 13:53:05.253987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.362 [2024-07-15 13:53:05.254059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.362 [2024-07-15 13:53:05.254078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.621 NEW_FUNC[1/697]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:27.621 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.621 #7 NEW cov: 11968 ft: 11976 corp: 2/44b lim: 100 exec/s: 0 rss: 72Mb L: 43/43 MS: 5 CrossOver-ChangeByte-ChangeBinInt-ChangeBinInt-InsertRepeatedBytes- 00:07:27.621 [2024-07-15 13:53:05.595013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.621 [2024-07-15 13:53:05.595077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.621 [2024-07-15 13:53:05.595165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.621 [2024-07-15 13:53:05.595196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.621 #8 NEW cov: 12105 ft: 
12676 corp: 3/87b lim: 100 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 ChangeBit- 00:07:27.621 [2024-07-15 13:53:05.654905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.621 [2024-07-15 13:53:05.654932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.621 [2024-07-15 13:53:05.654982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073692774399 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.621 [2024-07-15 13:53:05.654998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.621 #9 NEW cov: 12111 ft: 12850 corp: 4/130b lim: 100 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 ChangeBit- 00:07:27.880 [2024-07-15 13:53:05.704926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.704953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.880 #15 NEW cov: 12196 ft: 13958 corp: 5/153b lim: 100 exec/s: 0 rss: 72Mb L: 23/43 MS: 1 EraseBytes- 00:07:27.880 [2024-07-15 13:53:05.745162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.745190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.880 [2024-07-15 13:53:05.745248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.745264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.880 #16 NEW cov: 12196 ft: 14117 corp: 6/196b lim: 100 exec/s: 0 rss: 72Mb L: 43/43 MS: 1 CopyPart- 00:07:27.880 [2024-07-15 13:53:05.785156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6755395506667520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.785183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.880 #17 NEW cov: 12196 ft: 14223 corp: 7/219b lim: 100 exec/s: 0 rss: 72Mb L: 23/43 MS: 1 ChangeBinInt- 00:07:27.880 [2024-07-15 13:53:05.835461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.835492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.880 [2024-07-15 13:53:05.835565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.835582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.880 #18 NEW cov: 12196 ft: 
14319 corp: 8/270b lim: 100 exec/s: 0 rss: 72Mb L: 51/51 MS: 1 InsertRepeatedBytes- 00:07:27.880 [2024-07-15 13:53:05.885383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6755395506667520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.885412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.880 #19 NEW cov: 12196 ft: 14420 corp: 9/293b lim: 100 exec/s: 0 rss: 72Mb L: 23/51 MS: 1 ChangeByte- 00:07:27.880 [2024-07-15 13:53:05.935531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.880 [2024-07-15 13:53:05.935559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.139 #20 NEW cov: 12196 ft: 14504 corp: 10/316b lim: 100 exec/s: 0 rss: 72Mb L: 23/51 MS: 1 ShuffleBytes- 00:07:28.139 [2024-07-15 13:53:05.975648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775188223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:05.975676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.139 #24 NEW cov: 12196 ft: 14544 corp: 11/344b lim: 100 exec/s: 0 rss: 72Mb L: 28/51 MS: 4 EraseBytes-ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:28.139 [2024-07-15 13:53:06.015785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:26384774036992 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.015813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.139 #25 NEW cov: 12196 ft: 14565 corp: 12/368b lim: 100 exec/s: 0 rss: 73Mb L: 24/51 MS: 1 InsertByte- 00:07:28.139 [2024-07-15 13:53:06.056042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.056071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.139 [2024-07-15 13:53:06.056141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.056157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.139 #26 NEW cov: 12196 ft: 14626 corp: 13/425b lim: 100 exec/s: 0 rss: 73Mb L: 57/57 MS: 1 CopyPart- 00:07:28.139 [2024-07-15 13:53:06.106479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.106509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.139 [2024-07-15 13:53:06.106556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.106572] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.139 [2024-07-15 13:53:06.106628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.106649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.139 [2024-07-15 13:53:06.106704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.106719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.139 NEW_FUNC[1/1]: 0x1a7e690 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:28.139 #27 NEW cov: 12219 ft: 15089 corp: 14/507b lim: 100 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:07:28.139 [2024-07-15 13:53:06.156209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775188223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.156244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.139 #28 NEW cov: 12219 ft: 15108 corp: 15/536b lim: 100 exec/s: 0 rss: 73Mb L: 29/82 MS: 1 InsertByte- 00:07:28.139 [2024-07-15 13:53:06.206324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6755395506667520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.139 [2024-07-15 13:53:06.206352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.398 #29 NEW cov: 12219 ft: 15124 corp: 16/559b lim: 100 exec/s: 0 rss: 73Mb L: 23/82 MS: 1 ShuffleBytes- 00:07:28.398 [2024-07-15 13:53:06.246897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.246926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.246968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.246984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.247036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.247052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.247106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.247122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.398 #30 NEW cov: 12219 ft: 15131 corp: 17/658b lim: 100 exec/s: 30 rss: 73Mb L: 99/99 MS: 1 CopyPart- 00:07:28.398 [2024-07-15 13:53:06.297017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.297047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.297081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.297097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.297153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.297173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.297235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.297251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.398 #31 NEW cov: 12219 ft: 15153 corp: 18/740b lim: 100 exec/s: 31 rss: 73Mb L: 82/99 MS: 1 CopyPart- 00:07:28.398 [2024-07-15 13:53:06.346860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.346887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.346938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.346953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.398 #32 NEW cov: 12219 ft: 15168 corp: 19/783b lim: 100 exec/s: 32 rss: 73Mb L: 43/99 MS: 1 ShuffleBytes- 00:07:28.398 [2024-07-15 13:53:06.387101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.387128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.387174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446743523936960511 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.398 [2024-07-15 13:53:06.387190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.398 [2024-07-15 13:53:06.387248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.398 [2024-07-15 13:53:06.387280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.398 #33 NEW cov: 12219 ft: 15472 corp: 20/859b lim: 100 exec/s: 33 rss: 73Mb L: 76/99 MS: 1 CopyPart-
00:07:28.398 [2024-07-15 13:53:06.437556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.398 [2024-07-15 13:53:06.437583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.398 [2024-07-15 13:53:06.437640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.398 [2024-07-15 13:53:06.437653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.398 [2024-07-15 13:53:06.437724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.398 [2024-07-15 13:53:06.437740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.398 [2024-07-15 13:53:06.437796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.398 [2024-07-15 13:53:06.437811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.398 [2024-07-15 13:53:06.437866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.399 [2024-07-15 13:53:06.437884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:28.657 #34 NEW cov: 12219 ft: 15523 corp: 21/959b lim: 100 exec/s: 34 rss: 73Mb L: 100/100 MS: 1 CopyPart-
00:07:28.657 [2024-07-15 13:53:06.487112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.487138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.657 #35 NEW cov: 12219 ft: 15587 corp: 22/996b lim: 100 exec/s: 35 rss: 73Mb L: 37/100 MS: 1 EraseBytes-
00:07:28.657 [2024-07-15 13:53:06.527228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6755395506667520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.527255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.657 #36 NEW cov: 12219 ft: 15612 corp: 23/1020b lim: 100 exec/s: 36 rss: 73Mb L: 24/100 MS: 1 InsertByte-
00:07:28.657 [2024-07-15 13:53:06.577366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.577392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.657 #37 NEW cov: 12219 ft: 15640 corp: 24/1043b lim: 100 exec/s: 37 rss: 73Mb L: 23/100 MS: 1 CrossOver-
00:07:28.657 [2024-07-15 13:53:06.618066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.618092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.618144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.618159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.618212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.618231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.618300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.618316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.618371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.618387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:28.657 #38 NEW cov: 12219 ft: 15650 corp: 25/1143b lim: 100 exec/s: 38 rss: 73Mb L: 100/100 MS: 1 ChangeByte-
00:07:28.657 [2024-07-15 13:53:06.668040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:352321536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.668066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.668136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.668152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.668210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.657 [2024-07-15 13:53:06.668230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.657 [2024-07-15 13:53:06.668283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9079256848778919936 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.658 [2024-07-15 13:53:06.668299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.658 #39 NEW cov: 12219 ft: 15656 corp: 26/1230b lim: 100 exec/s: 39 rss: 73Mb L: 87/100 MS: 1 InsertRepeatedBytes-
00:07:28.658 [2024-07-15 13:53:06.717891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.658 [2024-07-15 13:53:06.717918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.658 [2024-07-15 13:53:06.717972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.658 [2024-07-15 13:53:06.717989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.917 #45 NEW cov: 12219 ft: 15666 corp: 27/1287b lim: 100 exec/s: 45 rss: 73Mb L: 57/100 MS: 1 ShuffleBytes-
00:07:28.917 [2024-07-15 13:53:06.757814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:6755395506667520 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.757841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.917 #46 NEW cov: 12219 ft: 15708 corp: 28/1310b lim: 100 exec/s: 46 rss: 73Mb L: 23/100 MS: 1 ShuffleBytes-
00:07:28.917 [2024-07-15 13:53:06.808125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18380736405325111551 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.808151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.808204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.808223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.917 #47 NEW cov: 12219 ft: 15724 corp: 29/1362b lim: 100 exec/s: 47 rss: 74Mb L: 52/100 MS: 1 CopyPart-
00:07:28.917 [2024-07-15 13:53:06.858439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.858466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.858519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446743523936960511 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.858535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.858588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.858604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.917 #48 NEW cov: 12219 ft: 15744 corp: 30/1438b lim: 100 exec/s: 48 rss: 74Mb L: 76/100 MS: 1 ChangeBit-
00:07:28.917 [2024-07-15 13:53:06.908768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.908794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.908860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.908877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.908931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.908947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.909002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.909017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:28.917 #49 NEW cov: 12219 ft: 15762 corp: 31/1520b lim: 100 exec/s: 49 rss: 74Mb L: 82/100 MS: 1 ChangeBit-
00:07:28.917 [2024-07-15 13:53:06.958522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775188223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.958549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:28.917 [2024-07-15 13:53:06.958607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446486233937870847 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:28.917 [2024-07-15 13:53:06.958623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:28.917 #50 NEW cov: 12219 ft: 15785 corp: 32/1574b lim: 100 exec/s: 50 rss: 74Mb L: 54/100 MS: 1 CopyPart-
00:07:29.176 [2024-07-15 13:53:06.998804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:06.998831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:06.998868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:06.998884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:06.998956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:06.998972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.176 #51 NEW cov: 12219 ft: 15797 corp: 33/1646b lim: 100 exec/s: 51 rss: 74Mb L: 72/100 MS: 1 EraseBytes-
00:07:29.176 [2024-07-15 13:53:07.038766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.038793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:07.038829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073692774399 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.038845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.176 #52 NEW cov: 12219 ft: 15806 corp: 34/1689b lim: 100 exec/s: 52 rss: 74Mb L: 43/100 MS: 1 ChangeBinInt-
00:07:29.176 [2024-07-15 13:53:07.078738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744052595359743 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.078766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.176 #53 NEW cov: 12219 ft: 15817 corp: 35/1712b lim: 100 exec/s: 53 rss: 74Mb L: 23/100 MS: 1 ChangeBit-
00:07:29.176 [2024-07-15 13:53:07.129014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.129041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:07.129096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.129112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.176 #59 NEW cov: 12219 ft: 15836 corp: 36/1755b lim: 100 exec/s: 59 rss: 74Mb L: 43/100 MS: 1 CopyPart-
00:07:29.176 [2024-07-15 13:53:07.179174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069775228927 len:65408 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.179200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:07.179252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.179269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.176 #60 NEW cov: 12219 ft: 15838 corp: 37/1798b lim: 100 exec/s: 60 rss: 74Mb L: 43/100 MS: 1 ShuffleBytes-
00:07:29.176 [2024-07-15 13:53:07.219425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16927600440536394474 len:60139 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.219454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:07.219490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.219506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:29.176 [2024-07-15 13:53:07.219559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18158513697557839871 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:29.176 [2024-07-15 13:53:07.219574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:29.435 #61 NEW cov: 12219 ft: 15848 corp: 38/1858b lim: 100 exec/s: 30 rss: 74Mb L: 60/100 MS: 1 InsertRepeatedBytes-
00:07:29.435 #61 DONE cov: 12219 ft: 15848 corp: 38/1858b lim: 100 exec/s: 30 rss: 74Mb
00:07:29.436 Done 61 runs in 2 second(s)
00:07:29.436 13:53:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz
00:07:29.436 13:53:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:29.436 13:53:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:29.436 13:53:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT
00:07:29.436
00:07:29.436 real 1m5.583s
00:07:29.436 user 1m40.977s
00:07:29.436 sys 0m7.903s
00:07:29.436 13:53:07 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:29.436 13:53:07 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:29.436 ************************************
00:07:29.436 END TEST nvmf_llvm_fuzz
00:07:29.436 ************************************
00:07:29.436 13:53:07 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0
00:07:29.436 13:53:07 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:29.436 13:53:07 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:29.436 13:53:07 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:07:29.436 13:53:07 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:29.436 13:53:07 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:29.436 13:53:07 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:29.436 ************************************
00:07:29.436 START TEST vfio_llvm_fuzz
00:07:29.436 ************************************
00:07:29.436 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:07:29.697 * Looking for test storage...
00:07:29.697 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']'
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]]
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:07:29.697 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX=
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]]
00:07:29.698 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:07:29.698 #define SPDK_CONFIG_H
00:07:29.698 #define SPDK_CONFIG_APPS 1
00:07:29.698 #define SPDK_CONFIG_ARCH native
00:07:29.698 #undef SPDK_CONFIG_ASAN
00:07:29.698 #undef SPDK_CONFIG_AVAHI
00:07:29.698 #undef SPDK_CONFIG_CET
00:07:29.698 #define SPDK_CONFIG_COVERAGE 1
00:07:29.698 #define SPDK_CONFIG_CROSS_PREFIX
00:07:29.698 #undef SPDK_CONFIG_CRYPTO
00:07:29.698 #undef SPDK_CONFIG_CRYPTO_MLX5
00:07:29.698 #undef SPDK_CONFIG_CUSTOMOCF
00:07:29.698 #undef SPDK_CONFIG_DAOS
00:07:29.698 #define SPDK_CONFIG_DAOS_DIR
00:07:29.698 #define SPDK_CONFIG_DEBUG 1
00:07:29.698 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:07:29.698 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:07:29.698 #define SPDK_CONFIG_DPDK_INC_DIR
00:07:29.698 #define SPDK_CONFIG_DPDK_LIB_DIR
00:07:29.698 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:07:29.698 #undef SPDK_CONFIG_DPDK_UADK
00:07:29.698 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:07:29.698 #define SPDK_CONFIG_EXAMPLES 1
00:07:29.698 #undef SPDK_CONFIG_FC
00:07:29.698 #define SPDK_CONFIG_FC_PATH
00:07:29.698 #define SPDK_CONFIG_FIO_PLUGIN 1
00:07:29.698 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:07:29.698 #undef SPDK_CONFIG_FUSE
00:07:29.698 #define SPDK_CONFIG_FUZZER 1
00:07:29.698 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
00:07:29.698 #undef SPDK_CONFIG_GOLANG
00:07:29.698 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:07:29.698 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:07:29.698 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:07:29.698 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:07:29.698 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:07:29.698 #undef SPDK_CONFIG_HAVE_LIBBSD
00:07:29.698 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:07:29.698 #define SPDK_CONFIG_IDXD 1
00:07:29.698 #define SPDK_CONFIG_IDXD_KERNEL 1
00:07:29.698 #undef SPDK_CONFIG_IPSEC_MB
00:07:29.698 #define SPDK_CONFIG_IPSEC_MB_DIR
00:07:29.698 #define SPDK_CONFIG_ISAL 1
00:07:29.698 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:07:29.698 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:07:29.698 #define SPDK_CONFIG_LIBDIR
00:07:29.698 #undef SPDK_CONFIG_LTO
00:07:29.698 #define SPDK_CONFIG_MAX_LCORES 128
00:07:29.698 #define SPDK_CONFIG_NVME_CUSE 1
00:07:29.698 #undef SPDK_CONFIG_OCF
00:07:29.698 #define SPDK_CONFIG_OCF_PATH
00:07:29.698 #define SPDK_CONFIG_OPENSSL_PATH
00:07:29.698 #undef SPDK_CONFIG_PGO_CAPTURE
00:07:29.698 #define SPDK_CONFIG_PGO_DIR
00:07:29.698 #undef SPDK_CONFIG_PGO_USE
00:07:29.698 #define SPDK_CONFIG_PREFIX /usr/local
00:07:29.698 #undef SPDK_CONFIG_RAID5F
00:07:29.699 #undef SPDK_CONFIG_RBD
00:07:29.699 #define SPDK_CONFIG_RDMA 1
00:07:29.699 #define SPDK_CONFIG_RDMA_PROV verbs
00:07:29.699 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:07:29.699 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:07:29.699 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:07:29.699 #undef SPDK_CONFIG_SHARED
00:07:29.699 #undef SPDK_CONFIG_SMA
00:07:29.699 #define SPDK_CONFIG_TESTS 1
00:07:29.699 #undef SPDK_CONFIG_TSAN
00:07:29.699 #define SPDK_CONFIG_UBLK 1
00:07:29.699 #define SPDK_CONFIG_UBSAN 1
00:07:29.699 #undef SPDK_CONFIG_UNIT_TESTS
00:07:29.699 #undef SPDK_CONFIG_URING
00:07:29.699 #define SPDK_CONFIG_URING_PATH
00:07:29.699 #undef SPDK_CONFIG_URING_ZNS
00:07:29.699 #undef SPDK_CONFIG_USDT
00:07:29.699 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:07:29.699 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:07:29.699 #define SPDK_CONFIG_VFIO_USER 1
00:07:29.699 #define SPDK_CONFIG_VFIO_USER_DIR
00:07:29.699 #define SPDK_CONFIG_VHOST 1
00:07:29.699 #define SPDK_CONFIG_VIRTIO 1
00:07:29.699 #undef SPDK_CONFIG_VTUNE
00:07:29.699 #define SPDK_CONFIG_VTUNE_DIR
00:07:29.699 #define SPDK_CONFIG_WERROR 1
00:07:29.699 #define SPDK_CONFIG_WPDK_DIR
00:07:29.699 #undef SPDK_CONFIG_XNVME
00:07:29.699 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]=
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E'
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]]
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # :
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0
00:07:29.699 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # :
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # :
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # :
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # :
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']'
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']'
00:07:29.700 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind=
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind=
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']'
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]]
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]]
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=()
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE=
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2850682 ]]
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 2850682
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]]
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Pau7Oa
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]]
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]]
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.Pau7Oa/tests/vfio /tmp/spdk.Pau7Oa
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864
00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- #
uses["$mount"]=0 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=945618944 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4338810880 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=50215669760 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742551040 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11526881280 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866563072 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871273472 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342714368 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348510208 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5795840 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870765568 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871277568 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=512000 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174248960 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174253056 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:29.701 * Looking for test storage... 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=50215669760 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=13741473792 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:29.701 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:29.702 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:29.702 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:29.702 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:29.702 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
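Each fuzzer instance gets its own scratch tree under /tmp/vfio-user-N and a private copy of the vfio-user JSON config, produced by rewriting the template paths; the mkdir/sed trace that follows does exactly this. A minimal standalone sketch of the same step, assuming N is the fuzzer type and that the sed output lands in the per-instance conf path defined by the vfiouser_cfg local above (the redirect target is not shown in the trace itself):

    # per-instance setup, assuming N is the fuzzer type (0..6 in this job)
    N=0
    mkdir -p /tmp/vfio-user-$N/domain/1 /tmp/vfio-user-$N/domain/2
    sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$N/domain/1%" \
        -e "s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$N/domain/2%" \
        fuzz_vfio_json.conf > /tmp/vfio-user-$N/fuzz_vfio_json.conf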
00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:29.961 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:29.961 13:53:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:29.961 [2024-07-15 13:53:07.810076] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:29.961 [2024-07-15 13:53:07.810165] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850739 ] 00:07:29.961 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.961 [2024-07-15 13:53:07.899076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.961 [2024-07-15 13:53:07.988712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.220 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.220 INFO: Seed: 2838355695 00:07:30.220 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:30.220 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:30.220 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.220 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.220 #2 INITED exec/s: 0 rss: 66Mb 00:07:30.220 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:30.220 This may also happen if the target rejected all inputs we tried so far 00:07:30.220 [2024-07-15 13:53:08.251381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:30.737 NEW_FUNC[1/658]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:30.737 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:30.737 #17 NEW cov: 10960 ft: 10522 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 ChangeByte-CrossOver-ChangeBit-CrossOver-CopyPart- 00:07:30.737 #18 NEW cov: 10974 ft: 13754 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:30.996 #24 NEW cov: 10977 ft: 14151 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:30.996 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:30.996 #25 NEW cov: 10994 ft: 14975 corp: 5/25b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:31.255 #26 NEW cov: 10994 ft: 15212 corp: 6/31b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:31.255 #31 NEW cov: 10994 ft: 15310 corp: 7/37b lim: 6 exec/s: 31 rss: 74Mb L: 6/6 MS: 5 EraseBytes-ChangeBit-ChangeByte-CrossOver-InsertByte- 00:07:31.513 #32 NEW cov: 10994 ft: 16626 corp: 8/43b lim: 6 exec/s: 32 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:31.514 #33 NEW cov: 10994 ft: 16969 corp: 9/49b lim: 6 exec/s: 33 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:07:31.773 #34 NEW cov: 10994 ft: 17143 corp: 10/55b lim: 6 exec/s: 34 rss: 74Mb L: 6/6 MS: 1 CMP- DE: "\000\001"- 00:07:31.773 #35 NEW cov: 10994 ft: 17212 corp: 11/61b lim: 6 exec/s: 35 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:32.032 #36 NEW cov: 10994 ft: 17464 corp: 12/67b lim: 6 exec/s: 36 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:32.032 #42 NEW cov: 11001 ft: 17549 corp: 13/73b lim: 6 exec/s: 42 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:07:32.292 #43 NEW cov: 11001 ft: 18294 corp: 14/79b lim: 6 exec/s: 43 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:32.551 #44 NEW cov: 11001 ft: 18862 corp: 15/85b lim: 6 exec/s: 22 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:32.551 #44 DONE cov: 11001 ft: 18862 corp: 15/85b lim: 6 exec/s: 22 rss: 74Mb 00:07:32.551 ###### Recommended dictionary. ###### 00:07:32.551 "\000\001" # Uses: 1 00:07:32.551 ###### End of recommended dictionary. 
###### 00:07:32.551 Done 44 runs in 2 second(s) 00:07:32.551 [2024-07-15 13:53:10.413449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:32.810 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.810 13:53:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:32.810 [2024-07-15 13:53:10.731664] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
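In the libFuzzer output of the run that just finished, each "#N NEW" event reports cumulative progress: "cov:" is the number of covered code edges, "ft:" the number of observed features, "corp:" the corpus size, "exec/s:" the execution rate, and "MS:" the mutation sequence that produced the input. One hypothetical way to pull the coverage progression out of a saved copy of this log (the file name build.log is assumed, not part of the job):

    grep -oE '#[0-9]+ NEW cov: [0-9]+ ft: [0-9]+' build.log |
        awk '{print $1, "cov=" $4, "ft=" $6}'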
00:07:32.810 [2024-07-15 13:53:10.731745] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851114 ] 00:07:32.810 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.810 [2024-07-15 13:53:10.821747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.078 [2024-07-15 13:53:10.908850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.078 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.078 INFO: Seed: 1463400801 00:07:33.078 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:33.078 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:33.078 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:33.078 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.078 #2 INITED exec/s: 0 rss: 65Mb 00:07:33.078 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.078 This may also happen if the target rejected all inputs we tried so far 00:07:33.338 [2024-07-15 13:53:11.169974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:33.338 [2024-07-15 13:53:11.244010] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.338 [2024-07-15 13:53:11.244036] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.338 [2024-07-15 13:53:11.244071] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.596 NEW_FUNC[1/660]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:33.596 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.596 #22 NEW cov: 10959 ft: 10545 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 5 ChangeBinInt-CrossOver-ChangeBit-InsertByte-CopyPart- 00:07:33.854 [2024-07-15 13:53:11.744616] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.854 [2024-07-15 13:53:11.744658] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.854 [2024-07-15 13:53:11.744693] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.854 #23 NEW cov: 10973 ft: 13587 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:34.113 [2024-07-15 13:53:11.939519] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.113 [2024-07-15 13:53:11.939544] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.113 [2024-07-15 13:53:11.939561] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.113 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:34.113 #30 NEW cov: 10990 ft: 15224 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 2 EraseBytes-InsertByte- 00:07:34.113 [2024-07-15 13:53:12.145491] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.113 [2024-07-15 13:53:12.145514] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.113 [2024-07-15 13:53:12.145547] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.371 #31 NEW cov: 10990 ft: 16054 corp: 5/17b lim: 4 exec/s: 31 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:34.371 [2024-07-15 13:53:12.348712] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.371 [2024-07-15 13:53:12.348736] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.371 [2024-07-15 13:53:12.348753] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.653 #37 NEW cov: 10990 ft: 16347 corp: 6/21b lim: 4 exec/s: 37 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:34.653 [2024-07-15 13:53:12.548684] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.653 [2024-07-15 13:53:12.548707] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.653 [2024-07-15 13:53:12.548740] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.653 #38 NEW cov: 10990 ft: 16826 corp: 7/25b lim: 4 exec/s: 38 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:34.912 [2024-07-15 13:53:12.750209] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.912 [2024-07-15 13:53:12.750239] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.912 [2024-07-15 13:53:12.750258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.912 #44 NEW cov: 10990 ft: 17184 corp: 8/29b lim: 4 exec/s: 44 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:34.912 [2024-07-15 13:53:12.955888] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.912 [2024-07-15 13:53:12.955912] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.912 [2024-07-15 13:53:12.955945] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:35.170 #45 NEW cov: 10997 ft: 17325 corp: 9/33b lim: 4 exec/s: 45 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:35.170 [2024-07-15 13:53:13.153585] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:35.170 [2024-07-15 13:53:13.153609] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:35.170 [2024-07-15 13:53:13.153626] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:35.428 #46 NEW cov: 10997 ft: 17670 corp: 10/37b lim: 4 exec/s: 23 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:35.429 #46 DONE cov: 10997 ft: 17670 corp: 10/37b lim: 4 exec/s: 23 rss: 74Mb 00:07:35.429 Done 46 runs in 2 second(s) 00:07:35.429 [2024-07-15 13:53:13.292449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:35.687 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:35.687 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.687 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.687 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:35.687 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:35.687 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # 
local timen=1 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:35.688 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.688 13:53:13 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:35.688 [2024-07-15 13:53:13.607166] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:35.688 [2024-07-15 13:53:13.607260] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851490 ] 00:07:35.688 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.688 [2024-07-15 13:53:13.697059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.945 [2024-07-15 13:53:13.786194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.945 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.945 INFO: Seed: 49417438 00:07:36.203 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:36.203 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:36.203 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:36.203 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.203 #2 INITED exec/s: 0 rss: 65Mb 00:07:36.203 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:36.203 This may also happen if the target rejected all inputs we tried so far 00:07:36.203 [2024-07-15 13:53:14.049079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:36.203 [2024-07-15 13:53:14.115509] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.524 NEW_FUNC[1/659]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:36.524 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.524 #6 NEW cov: 10934 ft: 10906 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 4 ChangeBit-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:36.782 [2024-07-15 13:53:14.626109] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.782 #7 NEW cov: 10953 ft: 13404 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:07:36.782 [2024-07-15 13:53:14.810289] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.040 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:37.040 #8 NEW cov: 10973 ft: 15129 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:07:37.040 [2024-07-15 13:53:15.015033] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.299 #9 NEW cov: 10973 ft: 15724 corp: 5/33b lim: 8 exec/s: 9 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:37.299 [2024-07-15 13:53:15.198115] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.299 #10 NEW cov: 10973 ft: 16333 corp: 6/41b lim: 8 exec/s: 10 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:37.557 [2024-07-15 13:53:15.392883] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.557 #11 NEW cov: 10973 ft: 16485 corp: 7/49b lim: 8 exec/s: 11 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:37.557 [2024-07-15 13:53:15.582467] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.815 #12 NEW cov: 10973 ft: 16745 corp: 8/57b lim: 8 exec/s: 12 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:37.815 [2024-07-15 13:53:15.773056] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.815 #13 NEW cov: 10980 ft: 17124 corp: 9/65b lim: 8 exec/s: 13 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:38.074 [2024-07-15 13:53:15.967145] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:38.074 #14 NEW cov: 10980 ft: 17490 corp: 10/73b lim: 8 exec/s: 7 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:38.074 #14 DONE cov: 10980 ft: 17490 corp: 10/73b lim: 8 exec/s: 7 rss: 74Mb 00:07:38.074 Done 14 runs in 2 second(s) 00:07:38.074 [2024-07-15 13:53:16.099441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:38.333 13:53:16 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:38.333 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:38.333 13:53:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:38.333 [2024-07-15 13:53:16.398658] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:38.333 [2024-07-15 13:53:16.398736] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851860 ] 00:07:38.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.592 [2024-07-15 13:53:16.484705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.592 [2024-07-15 13:53:16.567187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.850 INFO: Running with entropic power schedule (0xFF, 100). 
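The "echo leak:..." lines in the setup trace above build a LeakSanitizer suppression file: each "leak:<pattern>" entry silences leak reports whose stack contains a matching function, and LSAN_OPTIONS points the runtime at that file. A standalone sketch of the same mechanism, with an illustrative corpus path and a -runs value chosen only for the example:

    cat > /var/tmp/suppress_vfio_fuzz <<'EOF'
    leak:spdk_nvmf_qpair_disconnect
    leak:nvmf_ctrlr_create
    EOF
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
        ./llvm_vfio_fuzz -runs=0 /tmp/corpus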
00:07:38.850 INFO: Seed: 2830428525 00:07:38.850 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:38.850 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:38.850 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.850 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.850 #2 INITED exec/s: 0 rss: 66Mb 00:07:38.850 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.850 This may also happen if the target rejected all inputs we tried so far 00:07:38.850 [2024-07-15 13:53:16.834885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:39.368 NEW_FUNC[1/658]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:39.368 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:39.368 #101 NEW cov: 10938 ft: 10915 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 CrossOver-ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:39.626 NEW_FUNC[1/1]: 0x1a45cb0 in reactor_post_process_lw_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:868 00:07:39.626 #107 NEW cov: 10964 ft: 14023 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:39.626 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:39.626 #108 NEW cov: 10981 ft: 15545 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:39.884 #112 NEW cov: 10981 ft: 15945 corp: 5/129b lim: 32 exec/s: 112 rss: 74Mb L: 32/32 MS: 4 EraseBytes-InsertByte-InsertRepeatedBytes-CopyPart- 00:07:40.143 #113 NEW cov: 10981 ft: 16841 corp: 6/161b lim: 32 exec/s: 113 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\017\200\014\037\262\177\000\000"- 00:07:40.403 #114 NEW cov: 10981 ft: 17329 corp: 7/193b lim: 32 exec/s: 114 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:40.403 #115 NEW cov: 10981 ft: 17384 corp: 8/225b lim: 32 exec/s: 115 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:40.661 #116 NEW cov: 10988 ft: 17944 corp: 9/257b lim: 32 exec/s: 116 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:40.920 #117 NEW cov: 10988 ft: 18057 corp: 10/289b lim: 32 exec/s: 117 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:40.920 #118 NEW cov: 10988 ft: 18076 corp: 11/321b lim: 32 exec/s: 59 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:40.920 #118 DONE cov: 10988 ft: 18076 corp: 11/321b lim: 32 exec/s: 59 rss: 74Mb 00:07:40.920 ###### Recommended dictionary. ###### 00:07:40.920 "\017\200\014\037\262\177\000\000" # Uses: 0 00:07:40.920 ###### End of recommended dictionary. 
###### 00:07:40.920 Done 118 runs in 2 second(s) 00:07:40.920 [2024-07-15 13:53:18.970429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:41.179 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:41.438 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:41.438 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:41.438 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.438 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:41.438 13:53:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:41.438 [2024-07-15 13:53:19.286238] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
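At the end of a run, libFuzzer prints a "Recommended dictionary" of byte sequences that yielded new coverage, with a use count for each. Those entries can be fed back into later runs via -dict=; a hypothetical example (the file name is assumed, and the octal escapes printed above are rewritten as the \xNN hex escapes the dictionary parser expects):

    cat > vfio.dict <<'EOF'
    # token name is arbitrary; bytes are "\017\200\014\037\262\177\000\000" from the run above
    kw1="\x0f\x80\x0c\x1f\xb2\x7f\x00\x00"
    EOF
    ./llvm_vfio_fuzz -dict=vfio.dict /tmp/corpus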
00:07:41.438 [2024-07-15 13:53:19.286321] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852235 ] 00:07:41.438 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.438 [2024-07-15 13:53:19.375281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.438 [2024-07-15 13:53:19.459931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.697 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.697 INFO: Seed: 1424438505 00:07:41.697 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:41.697 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:41.697 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:41.697 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.697 #2 INITED exec/s: 0 rss: 65Mb 00:07:41.697 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.697 This may also happen if the target rejected all inputs we tried so far 00:07:41.697 [2024-07-15 13:53:19.719164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:42.214 NEW_FUNC[1/659]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:42.214 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:42.214 #82 NEW cov: 10949 ft: 10403 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 ChangeByte-ChangeBit-ChangeByte-InsertRepeatedBytes-CrossOver- 00:07:42.472 #98 NEW cov: 10966 ft: 13727 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:42.731 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:42.731 #109 NEW cov: 10983 ft: 15460 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:42.989 #115 NEW cov: 10983 ft: 16427 corp: 5/129b lim: 32 exec/s: 115 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:42.989 #125 NEW cov: 10983 ft: 16702 corp: 6/161b lim: 32 exec/s: 125 rss: 74Mb L: 32/32 MS: 5 EraseBytes-CrossOver-CMP-ChangeBinInt-InsertRepeatedBytes- DE: "\000\002"- 00:07:43.247 #131 NEW cov: 10983 ft: 17111 corp: 7/193b lim: 32 exec/s: 131 rss: 74Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\002"- 00:07:43.506 #132 NEW cov: 10983 ft: 17734 corp: 8/225b lim: 32 exec/s: 132 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:43.764 #143 NEW cov: 10990 ft: 17962 corp: 9/257b lim: 32 exec/s: 143 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:43.764 #144 NEW cov: 10990 ft: 17986 corp: 10/289b lim: 32 exec/s: 72 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:43.764 #144 DONE cov: 10990 ft: 17986 corp: 10/289b lim: 32 exec/s: 72 rss: 74Mb 00:07:43.764 ###### Recommended dictionary. ###### 00:07:43.764 "\000\002" # Uses: 2 00:07:43.764 ###### End of recommended dictionary. 
###### 00:07:43.764 Done 144 runs in 2 second(s) 00:07:43.764 [2024-07-15 13:53:21.808438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:44.023 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:44.024 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:44.024 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:44.284 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.284 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:44.284 13:53:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:44.284 [2024-07-15 13:53:22.127396] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
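The "(( i++ ))" / "(( i < fuzz_num ))" trace above is common.sh stepping through the registered fuzzers one type at a time. Condensed to its core, with names taken from the trace and fuzz_num derived from the grep shown earlier in the log:

    # fuzz_num counts the '.fn =' handler registrations in llvm_vfio_fuzz.c (7 here)
    fuzz_num=$(grep -c '\.fn =' llvm_vfio_fuzz.c)
    for ((i = 0; i < fuzz_num; i++)); do
        start_llvm_fuzz "$i" 1 0x1   # fuzzer type, time budget in seconds, core mask
    done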
00:07:44.284 [2024-07-15 13:53:22.127502] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852608 ] 00:07:44.284 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.284 [2024-07-15 13:53:22.215587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.284 [2024-07-15 13:53:22.301620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.588 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.588 INFO: Seed: 4269438304 00:07:44.588 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:44.588 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:44.588 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:44.588 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.588 #2 INITED exec/s: 0 rss: 65Mb 00:07:44.588 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.588 This may also happen if the target rejected all inputs we tried so far 00:07:44.588 [2024-07-15 13:53:22.569928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:44.897 [2024-07-15 13:53:22.641569] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.897 [2024-07-15 13:53:22.641611] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.156 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:45.156 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:45.156 #37 NEW cov: 10961 ft: 10467 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 5 InsertRepeatedBytes-InsertByte-ChangeBit-InsertRepeatedBytes-CrossOver- 00:07:45.156 [2024-07-15 13:53:23.148851] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.156 [2024-07-15 13:53:23.148903] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.414 #53 NEW cov: 10975 ft: 14142 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:45.414 [2024-07-15 13:53:23.336883] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.414 [2024-07-15 13:53:23.336916] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.414 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:45.414 #54 NEW cov: 10992 ft: 15705 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:07:45.672 [2024-07-15 13:53:23.542866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.672 [2024-07-15 13:53:23.542898] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.672 #60 NEW cov: 10992 ft: 16584 corp: 5/53b lim: 13 exec/s: 60 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:07:45.673 [2024-07-15 13:53:23.728808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.673 [2024-07-15 
13:53:23.728839] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.930 #61 NEW cov: 10992 ft: 16831 corp: 6/66b lim: 13 exec/s: 61 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "M\026\000\000\000\000\000\000"- 00:07:45.930 [2024-07-15 13:53:23.913862] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.930 [2024-07-15 13:53:23.913894] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.188 #62 NEW cov: 10992 ft: 17221 corp: 7/79b lim: 13 exec/s: 62 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:46.188 [2024-07-15 13:53:24.106030] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.189 [2024-07-15 13:53:24.106059] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.189 #63 NEW cov: 10992 ft: 17299 corp: 8/92b lim: 13 exec/s: 63 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:46.446 [2024-07-15 13:53:24.289826] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.446 [2024-07-15 13:53:24.289857] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.446 #64 NEW cov: 10999 ft: 17566 corp: 9/105b lim: 13 exec/s: 64 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:46.446 [2024-07-15 13:53:24.481299] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.446 [2024-07-15 13:53:24.481329] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.705 #65 NEW cov: 10999 ft: 17957 corp: 10/118b lim: 13 exec/s: 32 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:46.705 #65 DONE cov: 10999 ft: 17957 corp: 10/118b lim: 13 exec/s: 32 rss: 74Mb 00:07:46.705 ###### Recommended dictionary. ###### 00:07:46.705 "M\026\000\000\000\000\000\000" # Uses: 0 00:07:46.705 ###### End of recommended dictionary. 
###### 00:07:46.705 Done 65 runs in 2 second(s) 00:07:46.705 [2024-07-15 13:53:24.613448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:46.963 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:46.964 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:46.964 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:46.964 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:46.964 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:46.964 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.964 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:46.964 13:53:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:46.964 [2024-07-15 13:53:24.930207] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:46.964 [2024-07-15 13:53:24.930322] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852987 ] 00:07:46.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.964 [2024-07-15 13:53:25.018880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.223 [2024-07-15 13:53:25.099978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.223 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.223 INFO: Seed: 2747483982 00:07:47.482 INFO: Loaded 1 modules (355076 inline 8-bit counters): 355076 [0x296cacc, 0x29c35d0), 00:07:47.482 INFO: Loaded 1 PC tables (355076 PCs): 355076 [0x29c35d0,0x2f2e610), 00:07:47.482 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:47.482 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.482 #2 INITED exec/s: 0 rss: 66Mb 00:07:47.482 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.482 This may also happen if the target rejected all inputs we tried so far 00:07:47.482 [2024-07-15 13:53:25.342595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:47.482 [2024-07-15 13:53:25.414565] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.482 [2024-07-15 13:53:25.414597] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.050 NEW_FUNC[1/660]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:48.050 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:48.050 #50 NEW cov: 10952 ft: 10451 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 3 InsertRepeatedBytes-ChangeBit-CrossOver- 00:07:48.050 [2024-07-15 13:53:25.921378] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.050 [2024-07-15 13:53:25.921427] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.050 #56 NEW cov: 10967 ft: 13647 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:48.050 [2024-07-15 13:53:26.112728] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.050 [2024-07-15 13:53:26.112762] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.309 NEW_FUNC[1/1]: 0x1a4abc0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:48.309 #62 NEW cov: 10984 ft: 15120 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:07:48.309 [2024-07-15 13:53:26.318104] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.309 [2024-07-15 13:53:26.318135] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.567 #68 NEW cov: 10984 ft: 16462 corp: 5/37b lim: 9 exec/s: 68 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:07:48.567 [2024-07-15 13:53:26.519719] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.567 [2024-07-15 13:53:26.519750] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.567 #69 NEW cov: 10984 ft: 16703 corp: 6/46b lim: 9 exec/s: 69 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:07:48.826 [2024-07-15 13:53:26.709078] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:48.826 [2024-07-15 13:53:26.709109] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.826 #70 NEW cov: 10984 ft: 16847 corp: 7/55b lim: 9 exec/s: 70 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:49.084 [2024-07-15 13:53:26.898397] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:49.084 [2024-07-15 13:53:26.898431] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.084 #71 NEW cov: 10984 ft: 17032 corp: 8/64b lim: 9 exec/s: 71 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:07:49.084 [2024-07-15 13:53:27.094478] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:49.084 [2024-07-15 13:53:27.094509] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.342 #77 NEW cov: 10991 ft: 17418 corp: 9/73b lim: 9 exec/s: 77 rss: 74Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:49.342 [2024-07-15 13:53:27.288806] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:49.342 [2024-07-15 13:53:27.288836] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:49.343 #83 NEW cov: 10991 ft: 17477 corp: 10/82b lim: 9 exec/s: 41 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:07:49.343 #83 DONE cov: 10991 ft: 17477 corp: 10/82b lim: 9 exec/s: 41 rss: 74Mb 00:07:49.343 ###### Recommended dictionary. ###### 00:07:49.343 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:49.343 ###### End of recommended dictionary. 
###### 00:07:49.343 Done 83 runs in 2 second(s) 00:07:49.602 [2024-07-15 13:53:27.419441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:49.861 13:53:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:49.861 13:53:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.861 13:53:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.861 13:53:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:49.861 00:07:49.861 real 0m20.222s 00:07:49.861 user 0m27.852s 00:07:49.861 sys 0m2.126s 00:07:49.861 13:53:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.861 13:53:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:49.861 ************************************ 00:07:49.861 END TEST vfio_llvm_fuzz 00:07:49.861 ************************************ 00:07:49.861 13:53:27 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:49.861 13:53:27 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:49.861 00:07:49.861 real 1m26.099s 00:07:49.861 user 2m8.943s 00:07:49.861 sys 0m10.232s 00:07:49.861 13:53:27 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.861 13:53:27 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:49.861 ************************************ 00:07:49.861 END TEST llvm_fuzz 00:07:49.861 ************************************ 00:07:49.861 13:53:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:49.861 13:53:27 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:07:49.861 13:53:27 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:07:49.861 13:53:27 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:07:49.861 13:53:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.861 13:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.861 13:53:27 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:07:49.861 13:53:27 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:07:49.861 13:53:27 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:07:49.861 13:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:55.133 INFO: APP EXITING 00:07:55.133 INFO: killing all VMs 00:07:55.133 INFO: killing vhost app 00:07:55.133 WARN: no vhost pid file found 00:07:55.133 INFO: EXIT DONE 00:07:58.429 Waiting for block devices as requested 00:07:58.429 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:07:58.429 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:07:58.429 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:58.429 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:58.429 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:58.689 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:58.689 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:58.689 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:58.948 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:58.948 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:58.948 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:07:59.208 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:59.208 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:59.208 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:59.467 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:59.467 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:59.467 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:59.726 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:59.726 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:03.916 Cleaning 00:08:03.916 Removing: /dev/shm/spdk_tgt_trace.pid2824600 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2823989 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2824600 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2825161 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2825904 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2826119 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2826926 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2826946 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2827273 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2827506 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2827851 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2828166 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2828412 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2828619 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2828819 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2829043 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2829659 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2832358 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2832716 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2833196 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2833387 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2833787 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2833804 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2834291 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2834384 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2834599 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2834783 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2834993 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2835022 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2835469 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2835667 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2835873 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2836111 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2836336 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2836361 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2836593 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2836791 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2837002 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2837200 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2837405 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2837605 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2837810 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2838012 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2838220 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2838418 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2838617 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2838827 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2839027 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2839236 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2839462 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2839703 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2839945 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2840195 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2840414 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2840614 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2840824 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2840894 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2841297 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2841751 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2842084 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2842458 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2842823 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2843198 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2843573 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2843941 00:08:03.916 Removing: 
/var/run/dpdk/spdk_pid2844327 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2844668 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2844968 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2845272 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2845645 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2846016 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2846390 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2846761 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2847125 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2847491 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2847782 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2848086 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2848440 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2848814 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2849183 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2849554 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2849926 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2850291 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2850739 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2851114 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2851490 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2851860 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2852235 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2852608 00:08:03.916 Removing: /var/run/dpdk/spdk_pid2852987 00:08:03.916 Clean 00:08:03.916 13:53:41 -- common/autotest_common.sh@1451 -- # return 0 00:08:03.916 13:53:41 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:08:03.916 13:53:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.916 13:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:03.916 13:53:41 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:08:03.916 13:53:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.916 13:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:03.916 13:53:41 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:03.916 13:53:41 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:03.916 13:53:41 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:03.916 13:53:41 -- spdk/autotest.sh@391 -- # hash lcov 00:08:03.916 13:53:41 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:04.175 13:53:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:04.175 13:53:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:04.175 13:53:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.175 13:53:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.175 13:53:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.175 13:53:42 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.175 13:53:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.175 13:53:42 -- paths/export.sh@5 -- $ export PATH 00:08:04.175 13:53:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.175 13:53:42 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:04.175 13:53:42 -- common/autobuild_common.sh@444 -- $ date +%s 00:08:04.175 13:53:42 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721044422.XXXXXX 00:08:04.175 13:53:42 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721044422.8A28sq 00:08:04.175 13:53:42 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:08:04.175 13:53:42 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:08:04.175 13:53:42 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:04.175 13:53:42 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:04.175 13:53:42 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:04.175 13:53:42 -- common/autobuild_common.sh@460 -- $ get_config_params 00:08:04.175 13:53:42 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:08:04.175 13:53:42 -- common/autotest_common.sh@10 -- $ set +x 00:08:04.175 13:53:42 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:04.175 13:53:42 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:08:04.175 13:53:42 -- pm/common@17 -- $ local monitor 00:08:04.175 13:53:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.175 13:53:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.175 13:53:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.175 13:53:42 -- pm/common@21 -- $ date +%s 00:08:04.175 13:53:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.175 13:53:42 -- pm/common@21 -- $ date +%s 
00:08:04.175 13:53:42 -- pm/common@25 -- $ sleep 1 00:08:04.175 13:53:42 -- pm/common@21 -- $ date +%s 00:08:04.175 13:53:42 -- pm/common@21 -- $ date +%s 00:08:04.175 13:53:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044422 00:08:04.175 13:53:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044422 00:08:04.175 13:53:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044422 00:08:04.175 13:53:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044422 00:08:04.175 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044422_collect-vmstat.pm.log 00:08:04.175 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044422_collect-cpu-temp.pm.log 00:08:04.175 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044422_collect-cpu-load.pm.log 00:08:04.175 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044422_collect-bmc-pm.bmc.pm.log 00:08:05.112 13:53:43 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:08:05.112 13:53:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:08:05.112 13:53:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:05.112 13:53:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:05.112 13:53:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:05.112 13:53:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:05.112 13:53:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:05.112 13:53:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:05.112 13:53:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:05.112 13:53:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:08:05.112 13:53:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:05.112 13:53:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:05.112 13:53:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:05.112 13:53:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.112 13:53:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:05.112 13:53:43 -- pm/common@44 -- $ pid=2858797 00:08:05.112 13:53:43 -- pm/common@50 -- $ kill -TERM 2858797 00:08:05.112 13:53:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.112 13:53:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:05.112 13:53:43 -- pm/common@44 -- $ pid=2858800 
00:08:05.112 13:53:43 -- pm/common@50 -- $ kill -TERM 2858800 00:08:05.112 13:53:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.112 13:53:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:05.112 13:53:43 -- pm/common@44 -- $ pid=2858801 00:08:05.112 13:53:43 -- pm/common@50 -- $ kill -TERM 2858801 00:08:05.112 13:53:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.112 13:53:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:05.112 13:53:43 -- pm/common@44 -- $ pid=2858839 00:08:05.112 13:53:43 -- pm/common@50 -- $ sudo -E kill -TERM 2858839 00:08:05.112 + [[ -n 2723604 ]] 00:08:05.112 + sudo kill 2723604 00:08:05.377 [Pipeline] } 00:08:05.394 [Pipeline] // stage 00:08:05.399 [Pipeline] } 00:08:05.416 [Pipeline] // timeout 00:08:05.422 [Pipeline] } 00:08:05.440 [Pipeline] // catchError 00:08:05.445 [Pipeline] } 00:08:05.463 [Pipeline] // wrap 00:08:05.469 [Pipeline] } 00:08:05.481 [Pipeline] // catchError 00:08:05.490 [Pipeline] stage 00:08:05.492 [Pipeline] { (Epilogue) 00:08:05.507 [Pipeline] catchError 00:08:05.509 [Pipeline] { 00:08:05.524 [Pipeline] echo 00:08:05.525 Cleanup processes 00:08:05.530 [Pipeline] sh 00:08:05.810 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:05.810 2858961 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:05.810 2859679 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:05.823 [Pipeline] sh 00:08:06.103 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:06.104 ++ grep -v 'sudo pgrep' 00:08:06.104 ++ awk '{print $1}' 00:08:06.104 + sudo kill -9 2858961 00:08:06.117 [Pipeline] sh 00:08:06.400 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:07.349 [Pipeline] sh 00:08:07.633 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:07.633 Artifacts sizes are good 00:08:07.646 [Pipeline] archiveArtifacts 00:08:07.653 Archiving artifacts 00:08:07.705 [Pipeline] sh 00:08:07.988 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:08.002 [Pipeline] cleanWs 00:08:08.010 [WS-CLEANUP] Deleting project workspace... 00:08:08.010 [WS-CLEANUP] Deferred wipeout is used... 00:08:08.016 [WS-CLEANUP] done 00:08:08.018 [Pipeline] } 00:08:08.030 [Pipeline] // catchError 00:08:08.039 [Pipeline] sh 00:08:08.378 + logger -p user.info -t JENKINS-CI 00:08:08.386 [Pipeline] } 00:08:08.401 [Pipeline] // stage 00:08:08.407 [Pipeline] } 00:08:08.423 [Pipeline] // node 00:08:08.428 [Pipeline] End of Pipeline 00:08:08.462 Finished: SUCCESS